00:00:00.002 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2384
00:00:00.002 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3649
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.117 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.118 The recommended git tool is: git
00:00:00.118 using credential 00000000-0000-0000-0000-000000000002
00:00:00.119 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.143 Fetching changes from the remote Git repository
00:00:00.145 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.173 Using shallow fetch with depth 1
00:00:00.173 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.173 > git --version # timeout=10
00:00:00.204 > git --version # 'git version 2.39.2'
00:00:00.204 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.234 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.234 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.200 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.212 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.223 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.223 > git config core.sparsecheckout # timeout=10
00:00:05.235 > git read-tree -mu HEAD # timeout=10
00:00:05.251 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
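The checkout sequence above is git's shallow-fetch pattern: fetch a single ref at depth 1, then check out FETCH_HEAD detached. A minimal local sketch of the same pattern (throwaway temp repos, illustrative branch name; not the job's real remote):

```shell
# Shallow-fetch sketch: one ref at --depth=1, detached FETCH_HEAD checkout.
set -e
tmp=$(mktemp -d)
git init -q -b master "$tmp/origin"
git -C "$tmp/origin" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "first"
git -C "$tmp/origin" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "second"
git init -q "$tmp/work"
# file:// forces the smart transport so --depth is honored for a local path:
git -C "$tmp/work" fetch -q --depth=1 "file://$tmp/origin" refs/heads/master
git -C "$tmp/work" checkout -q -f FETCH_HEAD
# A depth-1 fetch leaves only the tip commit reachable:
git -C "$tmp/work" rev-list --count HEAD
```

The `--depth=1` is why the log's fetch transfers only the tip of `refs/heads/master` rather than full history.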
00:00:05.272 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.272 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.413 [Pipeline] Start of Pipeline
00:00:05.426 [Pipeline] library
00:00:05.428 Loading library shm_lib@master
00:00:05.428 Library shm_lib@master is cached. Copying from home.
00:00:05.443 [Pipeline] node
00:00:05.457 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:05.459 [Pipeline] {
00:00:05.467 [Pipeline] catchError
00:00:05.468 [Pipeline] {
00:00:05.477 [Pipeline] wrap
00:00:05.485 [Pipeline] {
00:00:05.492 [Pipeline] stage
00:00:05.494 [Pipeline] { (Prologue)
00:00:05.511 [Pipeline] echo
00:00:05.513 Node: VM-host-WFP7
00:00:05.518 [Pipeline] cleanWs
00:00:05.526 [WS-CLEANUP] Deleting project workspace...
00:00:05.526 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.532 [WS-CLEANUP] done
00:00:05.717 [Pipeline] setCustomBuildProperty
00:00:05.807 [Pipeline] httpRequest
00:00:06.101 [Pipeline] echo
00:00:06.102 Sorcerer 10.211.164.20 is alive
00:00:06.110 [Pipeline] retry
00:00:06.112 [Pipeline] {
00:00:06.125 [Pipeline] httpRequest
00:00:06.129 HttpMethod: GET
00:00:06.130 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.130 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.136 Response Code: HTTP/1.1 200 OK
00:00:06.136 Success: Status code 200 is in the accepted range: 200,404
00:00:06.136 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.906 [Pipeline] }
00:00:06.921 [Pipeline] // retry
00:00:06.930 [Pipeline] sh
00:00:07.211 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.225 [Pipeline] httpRequest
00:00:07.778 [Pipeline] echo
00:00:07.780 Sorcerer 10.211.164.20 is alive
00:00:07.787 [Pipeline] retry
00:00:07.789 [Pipeline] {
00:00:07.800 [Pipeline] httpRequest
00:00:07.805 HttpMethod: GET
00:00:07.805 URL: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:07.806 Sending request to url: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:07.807 Response Code: HTTP/1.1 200 OK
00:00:07.807 Success: Status code 200 is in the accepted range: 200,404
00:00:07.808 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:31.326 [Pipeline] }
00:00:31.345 [Pipeline] // retry
00:00:31.353 [Pipeline] sh
00:00:31.647 + tar --no-same-owner -xf spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:34.204 [Pipeline] sh
00:00:34.491 + git -C spdk log --oneline -n5
00:00:34.491 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:00:34.491 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit()
00:00:34.491 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size
00:00:34.491 79daf868a dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy()
00:00:34.491 431baf1b5 dif: Insert abstraction into dif_generate/verify_copy() for NVMe PRACT
00:00:34.513 [Pipeline] withCredentials
00:00:34.525 > git --version # timeout=10
00:00:34.541 > git --version # 'git version 2.39.2'
00:00:34.560 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:34.562 [Pipeline] {
00:00:34.572 [Pipeline] retry
00:00:34.574 [Pipeline] {
00:00:34.590 [Pipeline] sh
00:00:34.876 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:35.149 [Pipeline] }
00:00:35.172 [Pipeline] // retry
00:00:35.178 [Pipeline] }
00:00:35.194 [Pipeline] // withCredentials
00:00:35.205 [Pipeline] httpRequest
00:00:35.609 [Pipeline] echo
00:00:35.611 Sorcerer 10.211.164.20 is alive
00:00:35.622 [Pipeline] retry
00:00:35.625 [Pipeline] {
00:00:35.639 [Pipeline] httpRequest
00:00:35.645 HttpMethod: GET
00:00:35.646 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:35.646 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:35.654 Response Code: HTTP/1.1 200 OK
00:00:35.654 Success: Status code 200 is in the accepted range: 200,404
00:00:35.655 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:21.803 [Pipeline] }
00:01:21.821 [Pipeline] // retry
00:01:21.829 [Pipeline] sh
00:01:22.114 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:23.509 [Pipeline] sh
00:01:23.793 + git -C dpdk log --oneline -n5
00:01:23.793 caf0f5d395 version: 22.11.4
00:01:23.793 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:23.793 dc9c799c7d vhost: fix missing spinlock unlock
00:01:23.793 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:23.793 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:23.814 [Pipeline] writeFile
00:01:23.829 [Pipeline] sh
00:01:24.115 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:24.128 [Pipeline] sh
00:01:24.411 + cat autorun-spdk.conf
00:01:24.411 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:24.411 SPDK_RUN_ASAN=1
00:01:24.411 SPDK_RUN_UBSAN=1
00:01:24.411 SPDK_TEST_RAID=1
00:01:24.411 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:24.411 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:24.411 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:24.418 RUN_NIGHTLY=1
00:01:24.421 [Pipeline] }
00:01:24.433 [Pipeline] // stage
00:01:24.449 [Pipeline] stage
00:01:24.451 [Pipeline] { (Run VM)
00:01:24.465 [Pipeline] sh
00:01:24.750 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:24.750 + echo 'Start stage prepare_nvme.sh'
00:01:24.750 Start stage prepare_nvme.sh
00:01:24.750 + [[ -n 4 ]]
00:01:24.750 + disk_prefix=ex4
00:01:24.750 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:01:24.750 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:01:24.750 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:01:24.750 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:24.750 ++ SPDK_RUN_ASAN=1
00:01:24.750 ++ SPDK_RUN_UBSAN=1
00:01:24.750 ++ SPDK_TEST_RAID=1
00:01:24.750 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:24.750 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:24.750 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:24.750 ++ RUN_NIGHTLY=1
00:01:24.750 + cd /var/jenkins/workspace/raid-vg-autotest
00:01:24.750 + nvme_files=()
00:01:24.750 + declare -A nvme_files
00:01:24.750 + backend_dir=/var/lib/libvirt/images/backends
00:01:24.750 + nvme_files['nvme.img']=5G
00:01:24.750 + nvme_files['nvme-cmb.img']=5G
00:01:24.750 + nvme_files['nvme-multi0.img']=4G
00:01:24.750 + nvme_files['nvme-multi1.img']=4G
00:01:24.750 + nvme_files['nvme-multi2.img']=4G
00:01:24.750 + nvme_files['nvme-openstack.img']=8G
00:01:24.750 + nvme_files['nvme-zns.img']=5G
00:01:24.750 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:24.750 + (( SPDK_TEST_FTL == 1 ))
00:01:24.750 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:24.750 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:24.750 + for nvme in "${!nvme_files[@]}"
00:01:24.750 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:01:24.750 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:24.750 + for nvme in "${!nvme_files[@]}"
00:01:24.750 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:01:24.750 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:24.750 + for nvme in "${!nvme_files[@]}"
00:01:24.750 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:01:24.750 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:24.750 + for nvme in "${!nvme_files[@]}"
00:01:24.750 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:01:24.750 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:24.750 + for nvme in "${!nvme_files[@]}"
00:01:24.750 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:01:24.750 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:24.750 + for nvme in "${!nvme_files[@]}"
00:01:24.750 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:01:24.750 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:24.750 + for nvme in "${!nvme_files[@]}"
00:01:24.750 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:01:25.319 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:25.319 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:01:25.319 + echo 'End stage prepare_nvme.sh'
00:01:25.319 End stage prepare_nvme.sh
00:01:25.331 [Pipeline] sh
00:01:25.612 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:25.612 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39
00:01:25.612
00:01:25.612 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:01:25.612 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:01:25.612 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:01:25.612 HELP=0
00:01:25.612 DRY_RUN=0
00:01:25.612 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,
00:01:25.612 NVME_DISKS_TYPE=nvme,nvme,
00:01:25.612 NVME_AUTO_CREATE=0
00:01:25.612 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,
00:01:25.612 NVME_CMB=,,
00:01:25.612 NVME_PMR=,,
00:01:25.612 NVME_ZNS=,,
00:01:25.612 NVME_MS=,,
00:01:25.612 NVME_FDP=,,
00:01:25.612 SPDK_VAGRANT_DISTRO=fedora39
00:01:25.612 SPDK_VAGRANT_VMCPU=10
00:01:25.612 SPDK_VAGRANT_VMRAM=12288
00:01:25.612 SPDK_VAGRANT_PROVIDER=libvirt
00:01:25.612 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:25.612 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:25.612 SPDK_OPENSTACK_NETWORK=0
00:01:25.612 VAGRANT_PACKAGE_BOX=0
00:01:25.612 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:25.612 FORCE_DISTRO=true
00:01:25.612 VAGRANT_BOX_VERSION=
00:01:25.612 EXTRA_VAGRANTFILES=
00:01:25.612 NIC_MODEL=virtio
00:01:25.612
00:01:25.612 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:25.612 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:27.535 Bringing machine 'default' up with 'libvirt' provider...
00:01:28.123 ==> default: Creating image (snapshot of base box volume).
00:01:28.123 ==> default: Creating domain with the following settings...
00:01:28.123 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732108509_102f02bdaa65ce526ee8
00:01:28.123 ==> default: -- Domain type: kvm
00:01:28.123 ==> default: -- Cpus: 10
00:01:28.123 ==> default: -- Feature: acpi
00:01:28.123 ==> default: -- Feature: apic
00:01:28.123 ==> default: -- Feature: pae
00:01:28.124 ==> default: -- Memory: 12288M
00:01:28.124 ==> default: -- Memory Backing: hugepages:
00:01:28.124 ==> default: -- Management MAC:
00:01:28.124 ==> default: -- Loader:
00:01:28.124 ==> default: -- Nvram:
00:01:28.124 ==> default: -- Base box: spdk/fedora39
00:01:28.124 ==> default: -- Storage pool: default
00:01:28.124 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732108509_102f02bdaa65ce526ee8.img (20G)
00:01:28.124 ==> default: -- Volume Cache: default
00:01:28.124 ==> default: -- Kernel:
00:01:28.124 ==> default: -- Initrd:
00:01:28.124 ==> default: -- Graphics Type: vnc
00:01:28.124 ==> default: -- Graphics Port: -1
00:01:28.124 ==> default: -- Graphics IP: 127.0.0.1
00:01:28.124 ==> default: -- Graphics Password: Not defined
00:01:28.124 ==> default: -- Video Type: cirrus
00:01:28.124 ==> default: -- Video VRAM: 9216
00:01:28.124 ==> default: -- Sound Type:
00:01:28.124 ==> default: -- Keymap: en-us
00:01:28.124 ==> default: -- TPM Path:
00:01:28.124 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:28.124 ==> default: -- Command line args:
00:01:28.124 ==> default: -> value=-device,
00:01:28.124 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:28.124 ==> default: -> value=-drive,
00:01:28.124 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:01:28.124 ==> default: -> value=-device,
00:01:28.124 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:28.124 ==> default: -> value=-device,
00:01:28.124 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:28.124 ==> default: -> value=-drive,
00:01:28.124 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:28.124 ==> default: -> value=-device,
00:01:28.124 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:28.124 ==> default: -> value=-drive,
00:01:28.124 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:28.124 ==> default: -> value=-device,
00:01:28.124 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:28.124 ==> default: -> value=-drive,
00:01:28.124 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:28.124 ==> default: -> value=-device,
00:01:28.124 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:28.383 ==> default: Creating shared folders metadata...
00:01:28.383 ==> default: Starting domain.
00:01:30.293 ==> default: Waiting for domain to get an IP address...
00:01:48.399 ==> default: Waiting for SSH to become available...
00:01:49.780 ==> default: Configuring and enabling network interfaces...
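In the QEMU command-line args above, each `-device nvme` creates one controller and each `-device nvme-ns` attaches one namespace to the controller named by `bus=`. A quick recount of that wiring (the `bus=` values below are copied from the log; the tally itself is just illustration):

```shell
# One nvme-ns device per namespace; nvme-0 gets one, nvme-1 gets three.
ns_args='bus=nvme-0,nsid=1
bus=nvme-1,nsid=1
bus=nvme-1,nsid=2
bus=nvme-1,nsid=3'
echo "$ns_args" | grep -c 'bus=nvme-1'
```

That matches the device listing the guest later reports: one single-namespace controller and one controller backed by the three `ex4-nvme-multi*` images.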
00:01:56.354 default: SSH address: 192.168.121.188:22
00:01:56.354 default: SSH username: vagrant
00:01:56.354 default: SSH auth method: private key
00:01:59.647 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:06.226 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:14.359 ==> default: Mounting SSHFS shared folder...
00:02:15.741 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:15.741 ==> default: Checking Mount..
00:02:17.652 ==> default: Folder Successfully Mounted!
00:02:17.652 ==> default: Running provisioner: file...
00:02:18.593 default: ~/.gitconfig => .gitconfig
00:02:19.164
00:02:19.164 SUCCESS!
00:02:19.164
00:02:19.164 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:19.164 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:19.164 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:19.164
00:02:19.175 [Pipeline] }
00:02:19.191 [Pipeline] // stage
00:02:19.200 [Pipeline] dir
00:02:19.201 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:19.202 [Pipeline] {
00:02:19.215 [Pipeline] catchError
00:02:19.218 [Pipeline] {
00:02:19.230 [Pipeline] sh
00:02:19.514 + vagrant ssh-config --host vagrant
00:02:19.514 + sed -ne /^Host/,$p
00:02:19.514 + tee ssh_conf
00:02:22.144 Host vagrant
00:02:22.144 HostName 192.168.121.188
00:02:22.144 User vagrant
00:02:22.144 Port 22
00:02:22.144 UserKnownHostsFile /dev/null
00:02:22.144 StrictHostKeyChecking no
00:02:22.144 PasswordAuthentication no
00:02:22.144 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:22.144 IdentitiesOnly yes
00:02:22.144 LogLevel FATAL
00:02:22.144 ForwardAgent yes
00:02:22.144 ForwardX11 yes
00:02:22.144
00:02:22.160 [Pipeline] withEnv
00:02:22.163 [Pipeline] {
00:02:22.179 [Pipeline] sh
00:02:22.465 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:22.465 source /etc/os-release
00:02:22.465 [[ -e /image.version ]] && img=$(< /image.version)
00:02:22.465 # Minimal, systemd-like check.
00:02:22.465 if [[ -e /.dockerenv ]]; then
00:02:22.465 # Clear garbage from the node's name:
00:02:22.465 # agt-er_autotest_547-896 -> autotest_547-896
00:02:22.465 # $HOSTNAME is the actual container id
00:02:22.465 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:22.465 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:22.465 # We can assume this is a mount from a host where container is running,
00:02:22.465 # so fetch its hostname to easily identify the target swarm worker.
00:02:22.465 container="$(< /etc/hostname) ($agent)"
00:02:22.465 else
00:02:22.465 # Fallback
00:02:22.465 container=$agent
00:02:22.465 fi
00:02:22.465 fi
00:02:22.465 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:22.465
00:02:22.739 [Pipeline] }
00:02:22.758 [Pipeline] // withEnv
00:02:22.766 [Pipeline] setCustomBuildProperty
00:02:22.782 [Pipeline] stage
00:02:22.784 [Pipeline] { (Tests)
00:02:22.801 [Pipeline] sh
00:02:23.087 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:23.361 [Pipeline] sh
00:02:23.643 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:23.919 [Pipeline] timeout
00:02:23.919 Timeout set to expire in 1 hr 30 min
00:02:23.921 [Pipeline] {
00:02:23.935 [Pipeline] sh
00:02:24.215 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:24.784 HEAD is now at 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:02:24.796 [Pipeline] sh
00:02:25.089 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:25.365 [Pipeline] sh
00:02:25.650 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:25.929 [Pipeline] sh
00:02:26.213 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:26.473 ++ readlink -f spdk_repo
00:02:26.473 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:26.473 + [[ -n /home/vagrant/spdk_repo ]]
00:02:26.473 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:26.473 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:26.473 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:26.473 + [[ !
-d /home/vagrant/spdk_repo/output ]]
00:02:26.473 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:26.473 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:26.473 + cd /home/vagrant/spdk_repo
00:02:26.473 + source /etc/os-release
00:02:26.473 ++ NAME='Fedora Linux'
00:02:26.473 ++ VERSION='39 (Cloud Edition)'
00:02:26.473 ++ ID=fedora
00:02:26.473 ++ VERSION_ID=39
00:02:26.473 ++ VERSION_CODENAME=
00:02:26.473 ++ PLATFORM_ID=platform:f39
00:02:26.473 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:26.473 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:26.473 ++ LOGO=fedora-logo-icon
00:02:26.473 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:26.473 ++ HOME_URL=https://fedoraproject.org/
00:02:26.473 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:26.473 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:26.473 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:26.473 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:26.473 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:26.473 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:26.473 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:26.473 ++ SUPPORT_END=2024-11-12
00:02:26.473 ++ VARIANT='Cloud Edition'
00:02:26.473 ++ VARIANT_ID=cloud
00:02:26.473 + uname -a
00:02:26.473 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:26.473 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:27.043 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:27.043 Hugepages
00:02:27.043 node hugesize free / total
00:02:27.043 node0 1048576kB 0 / 0
00:02:27.043 node0 2048kB 0 / 0
00:02:27.043
00:02:27.043 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:27.043 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:27.043 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:27.043 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3
00:02:27.043 + rm -f /tmp/spdk-ld-path
00:02:27.043 + source autorun-spdk.conf
00:02:27.043 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:27.043 ++ SPDK_RUN_ASAN=1
00:02:27.043 ++ SPDK_RUN_UBSAN=1
00:02:27.043 ++ SPDK_TEST_RAID=1
00:02:27.043 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:27.043 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:27.043 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:27.043 ++ RUN_NIGHTLY=1
00:02:27.043 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:27.043 + [[ -n '' ]]
00:02:27.043 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:27.043 + for M in /var/spdk/build-*-manifest.txt
00:02:27.043 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:27.043 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:27.303 + for M in /var/spdk/build-*-manifest.txt
00:02:27.303 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:27.303 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:27.303 + for M in /var/spdk/build-*-manifest.txt
00:02:27.303 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:27.303 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:27.303 ++ uname
00:02:27.303 + [[ Linux == \L\i\n\u\x ]]
00:02:27.303 + sudo dmesg -T
00:02:27.303 + sudo dmesg --clear
00:02:27.303 + dmesg_pid=6147
00:02:27.303 + sudo dmesg -Tw
00:02:27.303 + [[ Fedora Linux == FreeBSD ]]
00:02:27.303 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:27.303 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:27.303 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:27.303 + [[ -x /usr/src/fio-static/fio ]]
00:02:27.303 + export FIO_BIN=/usr/src/fio-static/fio
00:02:27.303 + FIO_BIN=/usr/src/fio-static/fio
00:02:27.303 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:27.303 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:27.303 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:27.303 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:27.303 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:27.303 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:27.303 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:27.303 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:27.303 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:27.303 13:16:08 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:27.303 13:16:08 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:27.303 13:16:08 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:27.303 13:16:08 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:02:27.303 13:16:08 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:02:27.303 13:16:08 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:02:27.303 13:16:08 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:27.303 13:16:08 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:27.303 13:16:08 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:27.303 13:16:08 -- spdk_repo/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1
00:02:27.303 13:16:08 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:27.303 13:16:08 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:27.564 13:16:09 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:27.564 13:16:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:27.564 13:16:09 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:27.564 13:16:09 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:27.564 13:16:09 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:27.564 13:16:09 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:27.564 13:16:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:27.564 13:16:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:27.564 13:16:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:27.564 13:16:09 -- paths/export.sh@5 -- $ export PATH
00:02:27.564 13:16:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:27.564 13:16:09 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:27.564 13:16:09 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:27.564 13:16:09 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732108569.XXXXXX
00:02:27.564 13:16:09 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732108569.HIxVSj
00:02:27.564 13:16:09 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:27.564 13:16:09 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']'
00:02:27.564 13:16:09 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:27.564 13:16:09 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:27.564 13:16:09 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:27.564 13:16:09 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:27.564 13:16:09 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:27.564 13:16:09 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:27.565 13:16:09 -- common/autotest_common.sh@10 -- $ set +x
00:02:27.565 13:16:09 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:02:27.565 13:16:09 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:27.565 13:16:09 -- pm/common@17 -- $ local monitor
00:02:27.565 13:16:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:27.565 13:16:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:27.565 13:16:09 -- pm/common@25 -- $ sleep 1
00:02:27.565 13:16:09 -- pm/common@21 -- $ date +%s
00:02:27.565 13:16:09 -- pm/common@21 -- $ date +%s
00:02:27.565 13:16:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732108569
00:02:27.565 13:16:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732108569
00:02:27.565 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732108569_collect-cpu-load.pm.log
00:02:27.565 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732108569_collect-vmstat.pm.log
00:02:28.508 13:16:10 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:28.508 13:16:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:28.508 13:16:10 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:28.508 13:16:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:28.508 13:16:10 -- spdk/autobuild.sh@16 -- $ date -u
00:02:28.508 Wed Nov 20 01:16:10 PM UTC 2024
00:02:28.508 13:16:10 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:28.508 v25.01-pre-219-g557f022f6
00:02:28.508 13:16:10 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:28.508 13:16:10 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:28.508 13:16:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
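The temp workspace `mktemp -dt spdk_1732108569.XXXXXX` above embeds the `date +%s` epoch value the script captured; decoding it recovers the build's wall-clock time (GNU `date` assumed for the `-d @` form):

```shell
# Convert the epoch seconds baked into the workspace/monitor names back to
# a human-readable UTC timestamp; LC_ALL=C pins the day/month names.
LC_ALL=C date -u -d @1732108569 +'%a %b %d %H:%M:%S UTC %Y'
```

This prints `Wed Nov 20 13:16:09 UTC 2024`, one second before the `date -u` output the build itself logs shortly after ("Wed Nov 20 01:16:10 PM UTC 2024"), which is why the same `1732108569` suffix shows up on the monitor log filenames.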
00:02:28.508 13:16:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:28.508 13:16:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.508 ************************************ 00:02:28.508 START TEST asan 00:02:28.508 ************************************ 00:02:28.508 using asan 00:02:28.508 13:16:10 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:28.508 00:02:28.508 real 0m0.001s 00:02:28.508 user 0m0.001s 00:02:28.508 sys 0m0.000s 00:02:28.508 13:16:10 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:28.508 13:16:10 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:28.508 ************************************ 00:02:28.508 END TEST asan 00:02:28.508 ************************************ 00:02:28.770 13:16:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:28.770 13:16:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:28.770 13:16:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:28.770 13:16:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:28.770 13:16:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.770 ************************************ 00:02:28.770 START TEST ubsan 00:02:28.770 ************************************ 00:02:28.770 using ubsan 00:02:28.770 13:16:10 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:28.770 00:02:28.770 real 0m0.000s 00:02:28.770 user 0m0.000s 00:02:28.770 sys 0m0.000s 00:02:28.770 13:16:10 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:28.770 13:16:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:28.770 ************************************ 00:02:28.770 END TEST ubsan 00:02:28.770 ************************************ 00:02:28.770 13:16:10 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:28.770 13:16:10 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:28.770 13:16:10 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:28.770 
13:16:10 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:28.770 13:16:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:28.770 13:16:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.770 ************************************ 00:02:28.770 START TEST build_native_dpdk 00:02:28.770 ************************************ 00:02:28.770 13:16:10 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@71 -- 
$ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:28.770 caf0f5d395 version: 22.11.4 00:02:28.770 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:28.770 dc9c799c7d vhost: fix missing spinlock unlock 00:02:28.770 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:28.770 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" 
"power/kvm_vm") 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:28.770 13:16:10 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:28.770 13:16:10 
build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:28.770 13:16:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:28.771 13:16:10 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:28.771 patching file config/rte_config.h 00:02:28.771 Hunk #1 succeeded at 60 (offset 1 line). 
00:02:28.771 13:16:10 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:28.771 13:16:10 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:28.771 patching file lib/pcapng/rte_pcapng.c 00:02:28.771 Hunk #1 succeeded at 110 (offset -18 lines). 
00:02:28.771 13:16:10 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:28.771 13:16:10 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:28.771 13:16:10 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:28.771 13:16:10 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:28.771 13:16:10 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:28.771 13:16:10 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:28.771 13:16:10 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native 
-Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:34.057 The Meson build system 00:02:34.057 Version: 1.5.0 00:02:34.057 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:34.057 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:34.057 Build type: native build 00:02:34.057 Program cat found: YES (/usr/bin/cat) 00:02:34.057 Project name: DPDK 00:02:34.057 Project version: 22.11.4 00:02:34.057 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:34.057 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:34.057 Host machine cpu family: x86_64 00:02:34.057 Host machine cpu: x86_64 00:02:34.057 Message: ## Building in Developer Mode ## 00:02:34.057 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:34.057 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:34.057 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:34.057 Program objdump found: YES (/usr/bin/objdump) 00:02:34.057 Program python3 found: YES (/usr/bin/python3) 00:02:34.057 Program cat found: YES (/usr/bin/cat) 00:02:34.057 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:34.057 Checking for size of "void *" : 8 00:02:34.057 Checking for size of "void *" : 8 (cached) 00:02:34.057 Library m found: YES 00:02:34.057 Library numa found: YES 00:02:34.057 Has header "numaif.h" : YES 00:02:34.057 Library fdt found: NO 00:02:34.057 Library execinfo found: NO 00:02:34.057 Has header "execinfo.h" : YES 00:02:34.057 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:34.057 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:34.057 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:34.058 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:34.058 Run-time dependency openssl found: YES 3.1.1 00:02:34.058 Run-time dependency libpcap found: YES 1.10.4 00:02:34.058 Has header "pcap.h" with dependency libpcap: YES 00:02:34.058 Compiler for C supports arguments -Wcast-qual: YES 00:02:34.058 Compiler for C supports arguments -Wdeprecated: YES 00:02:34.058 Compiler for C supports arguments -Wformat: YES 00:02:34.058 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:34.058 Compiler for C supports arguments -Wformat-security: NO 00:02:34.058 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:34.058 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:34.058 Compiler for C supports arguments -Wnested-externs: YES 00:02:34.058 Compiler for C supports arguments -Wold-style-definition: YES 00:02:34.058 Compiler for C supports arguments -Wpointer-arith: YES 00:02:34.058 Compiler for C supports arguments -Wsign-compare: YES 00:02:34.058 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:34.058 Compiler for C supports arguments -Wundef: YES 00:02:34.058 Compiler for C supports arguments -Wwrite-strings: YES 00:02:34.058 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:34.058 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:34.058 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:34.058 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:34.058 Compiler for C supports arguments -mavx512f: YES 00:02:34.058 Checking if "AVX512 checking" compiles: YES 00:02:34.058 Fetching value of define "__SSE4_2__" : 1 00:02:34.058 Fetching value of define "__AES__" : 1 00:02:34.058 Fetching value of define "__AVX__" : 1 00:02:34.058 Fetching value of define "__AVX2__" : 1 00:02:34.058 Fetching value of define "__AVX512BW__" : 1 00:02:34.058 Fetching value of define "__AVX512CD__" : 1 00:02:34.058 Fetching value of define "__AVX512DQ__" : 1 00:02:34.058 Fetching value of define "__AVX512F__" : 1 00:02:34.058 Fetching value of define "__AVX512VL__" : 1 00:02:34.058 Fetching value of define "__PCLMUL__" : 1 00:02:34.058 Fetching value of define "__RDRND__" : 1 00:02:34.058 Fetching value of define "__RDSEED__" : 1 00:02:34.058 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:34.058 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:34.058 Message: lib/kvargs: Defining dependency "kvargs" 00:02:34.058 Message: lib/telemetry: Defining dependency "telemetry" 00:02:34.058 Checking for function "getentropy" : YES 00:02:34.058 Message: lib/eal: Defining dependency "eal" 00:02:34.058 Message: lib/ring: Defining dependency "ring" 00:02:34.058 Message: lib/rcu: Defining dependency "rcu" 00:02:34.058 Message: lib/mempool: Defining dependency "mempool" 00:02:34.058 Message: lib/mbuf: Defining dependency "mbuf" 00:02:34.058 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:34.058 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:34.058 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:34.058 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:34.058 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:34.058 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:34.058 Compiler for C supports arguments -mpclmul: YES 00:02:34.058 Compiler for C supports arguments -maes: YES 
00:02:34.058 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.058 Compiler for C supports arguments -mavx512bw: YES 00:02:34.058 Compiler for C supports arguments -mavx512dq: YES 00:02:34.058 Compiler for C supports arguments -mavx512vl: YES 00:02:34.058 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:34.058 Compiler for C supports arguments -mavx2: YES 00:02:34.058 Compiler for C supports arguments -mavx: YES 00:02:34.058 Message: lib/net: Defining dependency "net" 00:02:34.058 Message: lib/meter: Defining dependency "meter" 00:02:34.058 Message: lib/ethdev: Defining dependency "ethdev" 00:02:34.058 Message: lib/pci: Defining dependency "pci" 00:02:34.058 Message: lib/cmdline: Defining dependency "cmdline" 00:02:34.058 Message: lib/metrics: Defining dependency "metrics" 00:02:34.058 Message: lib/hash: Defining dependency "hash" 00:02:34.058 Message: lib/timer: Defining dependency "timer" 00:02:34.058 Fetching value of define "__AVX2__" : 1 (cached) 00:02:34.058 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:34.058 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:34.058 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:34.058 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:34.058 Message: lib/acl: Defining dependency "acl" 00:02:34.058 Message: lib/bbdev: Defining dependency "bbdev" 00:02:34.058 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:34.058 Run-time dependency libelf found: YES 0.191 00:02:34.058 Message: lib/bpf: Defining dependency "bpf" 00:02:34.058 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:34.058 Message: lib/compressdev: Defining dependency "compressdev" 00:02:34.058 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:34.058 Message: lib/distributor: Defining dependency "distributor" 00:02:34.058 Message: lib/efd: Defining dependency "efd" 00:02:34.058 Message: lib/eventdev: Defining dependency "eventdev" 00:02:34.058 Message: lib/gpudev: 
Defining dependency "gpudev" 00:02:34.058 Message: lib/gro: Defining dependency "gro" 00:02:34.058 Message: lib/gso: Defining dependency "gso" 00:02:34.058 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:34.058 Message: lib/jobstats: Defining dependency "jobstats" 00:02:34.058 Message: lib/latencystats: Defining dependency "latencystats" 00:02:34.058 Message: lib/lpm: Defining dependency "lpm" 00:02:34.058 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:34.058 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:34.058 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:34.058 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:34.058 Message: lib/member: Defining dependency "member" 00:02:34.058 Message: lib/pcapng: Defining dependency "pcapng" 00:02:34.058 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:34.058 Message: lib/power: Defining dependency "power" 00:02:34.058 Message: lib/rawdev: Defining dependency "rawdev" 00:02:34.058 Message: lib/regexdev: Defining dependency "regexdev" 00:02:34.058 Message: lib/dmadev: Defining dependency "dmadev" 00:02:34.058 Message: lib/rib: Defining dependency "rib" 00:02:34.058 Message: lib/reorder: Defining dependency "reorder" 00:02:34.058 Message: lib/sched: Defining dependency "sched" 00:02:34.058 Message: lib/security: Defining dependency "security" 00:02:34.058 Message: lib/stack: Defining dependency "stack" 00:02:34.058 Has header "linux/userfaultfd.h" : YES 00:02:34.058 Message: lib/vhost: Defining dependency "vhost" 00:02:34.058 Message: lib/ipsec: Defining dependency "ipsec" 00:02:34.058 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:34.058 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:34.058 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:34.058 Message: lib/fib: Defining dependency "fib" 00:02:34.058 Message: lib/port: Defining dependency "port" 00:02:34.058 Message: lib/pdump: Defining dependency "pdump" 
00:02:34.058 Message: lib/table: Defining dependency "table" 00:02:34.058 Message: lib/pipeline: Defining dependency "pipeline" 00:02:34.058 Message: lib/graph: Defining dependency "graph" 00:02:34.058 Message: lib/node: Defining dependency "node" 00:02:34.058 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:34.058 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:34.058 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:34.058 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:34.058 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:34.058 Compiler for C supports arguments -Wno-unused-value: YES 00:02:34.058 Compiler for C supports arguments -Wno-format: YES 00:02:34.058 Compiler for C supports arguments -Wno-format-security: YES 00:02:34.058 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:34.058 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:35.440 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:35.440 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:35.440 Fetching value of define "__AVX2__" : 1 (cached) 00:02:35.440 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:35.440 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:35.440 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:35.440 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:35.440 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:35.440 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:35.440 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:35.440 Configuring doxy-api.conf using configuration 00:02:35.440 Program sphinx-build found: NO 00:02:35.440 Configuring rte_build_config.h using configuration 00:02:35.440 Message: 00:02:35.440 ================= 00:02:35.440 Applications Enabled 00:02:35.440 ================= 00:02:35.440 00:02:35.440 apps: 
00:02:35.440 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:35.440 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:35.440 test-security-perf, 00:02:35.440 00:02:35.440 Message: 00:02:35.440 ================= 00:02:35.440 Libraries Enabled 00:02:35.440 ================= 00:02:35.440 00:02:35.440 libs: 00:02:35.440 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:35.440 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:35.440 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:35.440 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:35.440 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:35.440 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:35.440 table, pipeline, graph, node, 00:02:35.440 00:02:35.440 Message: 00:02:35.440 =============== 00:02:35.440 Drivers Enabled 00:02:35.440 =============== 00:02:35.440 00:02:35.440 common: 00:02:35.440 00:02:35.440 bus: 00:02:35.440 pci, vdev, 00:02:35.440 mempool: 00:02:35.440 ring, 00:02:35.440 dma: 00:02:35.440 00:02:35.440 net: 00:02:35.440 i40e, 00:02:35.440 raw: 00:02:35.440 00:02:35.440 crypto: 00:02:35.440 00:02:35.440 compress: 00:02:35.440 00:02:35.440 regex: 00:02:35.440 00:02:35.440 vdpa: 00:02:35.440 00:02:35.440 event: 00:02:35.440 00:02:35.440 baseband: 00:02:35.440 00:02:35.440 gpu: 00:02:35.440 00:02:35.440 00:02:35.440 Message: 00:02:35.440 ================= 00:02:35.440 Content Skipped 00:02:35.440 ================= 00:02:35.440 00:02:35.440 apps: 00:02:35.440 00:02:35.440 libs: 00:02:35.440 kni: explicitly disabled via build config (deprecated lib) 00:02:35.440 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:35.440 00:02:35.440 drivers: 00:02:35.440 common/cpt: not in enabled drivers build config 00:02:35.440 common/dpaax: not in enabled drivers build 
config
00:02:35.440 common/iavf: not in enabled drivers build config
00:02:35.440 common/idpf: not in enabled drivers build config
00:02:35.440 common/mvep: not in enabled drivers build config
00:02:35.440 common/octeontx: not in enabled drivers build config
00:02:35.440 bus/auxiliary: not in enabled drivers build config
00:02:35.440 bus/dpaa: not in enabled drivers build config
00:02:35.440 bus/fslmc: not in enabled drivers build config
00:02:35.440 bus/ifpga: not in enabled drivers build config
00:02:35.440 bus/vmbus: not in enabled drivers build config
00:02:35.440 common/cnxk: not in enabled drivers build config
00:02:35.440 common/mlx5: not in enabled drivers build config
00:02:35.440 common/qat: not in enabled drivers build config
00:02:35.440 common/sfc_efx: not in enabled drivers build config
00:02:35.440 mempool/bucket: not in enabled drivers build config
00:02:35.440 mempool/cnxk: not in enabled drivers build config
00:02:35.440 mempool/dpaa: not in enabled drivers build config
00:02:35.440 mempool/dpaa2: not in enabled drivers build config
00:02:35.440 mempool/octeontx: not in enabled drivers build config
00:02:35.440 mempool/stack: not in enabled drivers build config
00:02:35.440 dma/cnxk: not in enabled drivers build config
00:02:35.440 dma/dpaa: not in enabled drivers build config
00:02:35.440 dma/dpaa2: not in enabled drivers build config
00:02:35.440 dma/hisilicon: not in enabled drivers build config
00:02:35.440 dma/idxd: not in enabled drivers build config
00:02:35.440 dma/ioat: not in enabled drivers build config
00:02:35.440 dma/skeleton: not in enabled drivers build config
00:02:35.440 net/af_packet: not in enabled drivers build config
00:02:35.440 net/af_xdp: not in enabled drivers build config
00:02:35.440 net/ark: not in enabled drivers build config
00:02:35.440 net/atlantic: not in enabled drivers build config
00:02:35.440 net/avp: not in enabled drivers build config
00:02:35.440 net/axgbe: not in enabled drivers build config
00:02:35.440 net/bnx2x: not in enabled drivers build config
00:02:35.440 net/bnxt: not in enabled drivers build config
00:02:35.440 net/bonding: not in enabled drivers build config
00:02:35.440 net/cnxk: not in enabled drivers build config
00:02:35.440 net/cxgbe: not in enabled drivers build config
00:02:35.440 net/dpaa: not in enabled drivers build config
00:02:35.440 net/dpaa2: not in enabled drivers build config
00:02:35.440 net/e1000: not in enabled drivers build config
00:02:35.440 net/ena: not in enabled drivers build config
00:02:35.440 net/enetc: not in enabled drivers build config
00:02:35.440 net/enetfec: not in enabled drivers build config
00:02:35.440 net/enic: not in enabled drivers build config
00:02:35.440 net/failsafe: not in enabled drivers build config
00:02:35.440 net/fm10k: not in enabled drivers build config
00:02:35.440 net/gve: not in enabled drivers build config
00:02:35.440 net/hinic: not in enabled drivers build config
00:02:35.440 net/hns3: not in enabled drivers build config
00:02:35.440 net/iavf: not in enabled drivers build config
00:02:35.440 net/ice: not in enabled drivers build config
00:02:35.440 net/idpf: not in enabled drivers build config
00:02:35.440 net/igc: not in enabled drivers build config
00:02:35.440 net/ionic: not in enabled drivers build config
00:02:35.440 net/ipn3ke: not in enabled drivers build config
00:02:35.440 net/ixgbe: not in enabled drivers build config
00:02:35.440 net/kni: not in enabled drivers build config
00:02:35.440 net/liquidio: not in enabled drivers build config
00:02:35.440 net/mana: not in enabled drivers build config
00:02:35.440 net/memif: not in enabled drivers build config
00:02:35.440 net/mlx4: not in enabled drivers build config
00:02:35.440 net/mlx5: not in enabled drivers build config
00:02:35.440 net/mvneta: not in enabled drivers build config
00:02:35.440 net/mvpp2: not in enabled drivers build config
00:02:35.440 net/netvsc: not in enabled drivers build config
00:02:35.440 net/nfb: not in enabled drivers build config
00:02:35.440 net/nfp: not in enabled drivers build config
00:02:35.440 net/ngbe: not in enabled drivers build config
00:02:35.440 net/null: not in enabled drivers build config
00:02:35.440 net/octeontx: not in enabled drivers build config
00:02:35.440 net/octeon_ep: not in enabled drivers build config
00:02:35.440 net/pcap: not in enabled drivers build config
00:02:35.440 net/pfe: not in enabled drivers build config
00:02:35.440 net/qede: not in enabled drivers build config
00:02:35.440 net/ring: not in enabled drivers build config
00:02:35.440 net/sfc: not in enabled drivers build config
00:02:35.441 net/softnic: not in enabled drivers build config
00:02:35.441 net/tap: not in enabled drivers build config
00:02:35.441 net/thunderx: not in enabled drivers build config
00:02:35.441 net/txgbe: not in enabled drivers build config
00:02:35.441 net/vdev_netvsc: not in enabled drivers build config
00:02:35.441 net/vhost: not in enabled drivers build config
00:02:35.441 net/virtio: not in enabled drivers build config
00:02:35.441 net/vmxnet3: not in enabled drivers build config
00:02:35.441 raw/cnxk_bphy: not in enabled drivers build config
00:02:35.441 raw/cnxk_gpio: not in enabled drivers build config
00:02:35.441 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:35.441 raw/ifpga: not in enabled drivers build config
00:02:35.441 raw/ntb: not in enabled drivers build config
00:02:35.441 raw/skeleton: not in enabled drivers build config
00:02:35.441 crypto/armv8: not in enabled drivers build config
00:02:35.441 crypto/bcmfs: not in enabled drivers build config
00:02:35.441 crypto/caam_jr: not in enabled drivers build config
00:02:35.441 crypto/ccp: not in enabled drivers build config
00:02:35.441 crypto/cnxk: not in enabled drivers build config
00:02:35.441 crypto/dpaa_sec: not in enabled drivers build config
00:02:35.441 crypto/dpaa2_sec: not in enabled drivers build config
00:02:35.441 crypto/ipsec_mb: not in enabled drivers build config
00:02:35.441 crypto/mlx5: not in enabled drivers build config
00:02:35.441 crypto/mvsam: not in enabled drivers build config
00:02:35.441 crypto/nitrox: not in enabled drivers build config
00:02:35.441 crypto/null: not in enabled drivers build config
00:02:35.441 crypto/octeontx: not in enabled drivers build config
00:02:35.441 crypto/openssl: not in enabled drivers build config
00:02:35.441 crypto/scheduler: not in enabled drivers build config
00:02:35.441 crypto/uadk: not in enabled drivers build config
00:02:35.441 crypto/virtio: not in enabled drivers build config
00:02:35.441 compress/isal: not in enabled drivers build config
00:02:35.441 compress/mlx5: not in enabled drivers build config
00:02:35.441 compress/octeontx: not in enabled drivers build config
00:02:35.441 compress/zlib: not in enabled drivers build config
00:02:35.441 regex/mlx5: not in enabled drivers build config
00:02:35.441 regex/cn9k: not in enabled drivers build config
00:02:35.441 vdpa/ifc: not in enabled drivers build config
00:02:35.441 vdpa/mlx5: not in enabled drivers build config
00:02:35.441 vdpa/sfc: not in enabled drivers build config
00:02:35.441 event/cnxk: not in enabled drivers build config
00:02:35.441 event/dlb2: not in enabled drivers build config
00:02:35.441 event/dpaa: not in enabled drivers build config
00:02:35.441 event/dpaa2: not in enabled drivers build config
00:02:35.441 event/dsw: not in enabled drivers build config
00:02:35.441 event/opdl: not in enabled drivers build config
00:02:35.441 event/skeleton: not in enabled drivers build config
00:02:35.441 event/sw: not in enabled drivers build config
00:02:35.441 event/octeontx: not in enabled drivers build config
00:02:35.441 baseband/acc: not in enabled drivers build config
00:02:35.441 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:35.441 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:35.441 baseband/la12xx: not in enabled drivers build config
00:02:35.441 baseband/null: not in enabled drivers build config
00:02:35.441 baseband/turbo_sw: not in enabled drivers build config
00:02:35.441 gpu/cuda: not in enabled drivers build config
00:02:35.441 
00:02:35.441 
00:02:35.441 Build targets in project: 311
00:02:35.441 
00:02:35.441 DPDK 22.11.4
00:02:35.441 
00:02:35.441 User defined options
00:02:35.441 libdir : lib
00:02:35.441 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:35.441 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:35.441 c_link_args :
00:02:35.441 enable_docs : false
00:02:35.441 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:35.441 enable_kmods : false
00:02:35.441 machine : native
00:02:35.441 tests : false
00:02:35.441 
00:02:35.441 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:35.441 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
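The WARNING at the end of the configure output refers to Meson's deprecation of invoking the setup step implicitly as `meson [options] builddir`. A minimal sketch of the explicit, non-deprecated spelling, assuming the values shown under "User defined options" above (the long enable_drivers list is omitted here for brevity; this only assembles the command line rather than running meson, which may not be installed):

```shell
# Hypothetical sketch: options mirror the "User defined options" section of
# this log (prefix, libdir, enable_docs, enable_kmods, machine, tests).
OPTS="--prefix /home/vagrant/spdk_repo/dpdk/build --libdir lib \
-Denable_docs=false -Denable_kmods=false -Dmachine=native -Dtests=false"

# Deprecated form (what the log's tooling ran):  meson $OPTS builddir
# Recommended form:                              meson setup $OPTS builddir
CMD="meson setup $OPTS /home/vagrant/spdk_repo/dpdk/build-tmp"
echo "$CMD"
```

With the explicit `setup` subcommand, newer Meson versions emit no deprecation warning and the intent of the invocation is unambiguous.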
00:02:35.700 13:16:17 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:35.700 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:35.700 [1/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:35.700 [2/740] Generating lib/rte_kvargs_def with a custom command 00:02:35.700 [3/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:35.700 [4/740] Generating lib/rte_telemetry_def with a custom command 00:02:35.700 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:35.700 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:35.700 [7/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:35.700 [8/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:35.700 [9/740] Linking static target lib/librte_kvargs.a 00:02:35.970 [10/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:35.970 [11/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:35.970 [12/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:35.970 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:35.970 [14/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:35.970 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:35.970 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.970 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:35.970 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.970 [19/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.970 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:35.970 
[21/740] Linking target lib/librte_kvargs.so.23.0 00:02:35.970 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:35.970 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:36.229 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:36.229 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:36.229 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:36.229 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:36.229 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:36.229 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:36.229 [30/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:36.229 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:36.229 [32/740] Linking static target lib/librte_telemetry.a 00:02:36.229 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:36.229 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:36.229 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:36.229 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:36.229 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:36.487 [38/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:36.487 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:36.487 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:36.487 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:36.487 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:36.487 
[43/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.487 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:36.487 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:36.487 [46/740] Linking target lib/librte_telemetry.so.23.0 00:02:36.746 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:36.746 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:36.746 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:36.746 [50/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:36.746 [51/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:36.746 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:36.746 [53/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:36.746 [54/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:36.746 [55/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:36.746 [56/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:36.746 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:36.746 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:36.746 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:36.746 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:36.746 [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:36.746 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:36.746 [63/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:36.746 [64/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:36.746 [65/740] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:36.746 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:37.005 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:37.005 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:37.005 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:37.005 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:37.005 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:37.005 [72/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:37.005 [73/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:37.005 [74/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:37.005 [75/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:37.005 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:37.005 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:37.005 [78/740] Generating lib/rte_eal_def with a custom command 00:02:37.005 [79/740] Generating lib/rte_eal_mingw with a custom command 00:02:37.005 [80/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:37.005 [81/740] Generating lib/rte_ring_def with a custom command 00:02:37.005 [82/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:37.005 [83/740] Generating lib/rte_ring_mingw with a custom command 00:02:37.005 [84/740] Generating lib/rte_rcu_def with a custom command 00:02:37.005 [85/740] Generating lib/rte_rcu_mingw with a custom command 00:02:37.005 [86/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:37.264 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:37.264 [88/740] Linking static target lib/librte_ring.a 00:02:37.264 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 
00:02:37.264 [90/740] Generating lib/rte_mempool_def with a custom command 00:02:37.264 [91/740] Generating lib/rte_mempool_mingw with a custom command 00:02:37.264 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:37.264 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:37.264 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.523 [95/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:37.523 [96/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:37.523 [97/740] Generating lib/rte_mbuf_def with a custom command 00:02:37.523 [98/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:37.523 [99/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:37.523 [100/740] Linking static target lib/librte_eal.a 00:02:37.523 [101/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:37.523 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:37.523 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:37.781 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:37.781 [105/740] Linking static target lib/librte_rcu.a 00:02:37.781 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:37.781 [107/740] Linking static target lib/librte_mempool.a 00:02:37.781 [108/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:38.040 [109/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:38.040 [110/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:38.040 [111/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:38.040 [112/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:38.040 [113/740] Generating lib/rte_net_def with a custom command 00:02:38.040 [114/740] 
Generating lib/rte_net_mingw with a custom command 00:02:38.040 [115/740] Generating lib/rte_meter_def with a custom command 00:02:38.040 [116/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.040 [117/740] Generating lib/rte_meter_mingw with a custom command 00:02:38.040 [118/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:38.040 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:38.040 [120/740] Linking static target lib/librte_meter.a 00:02:38.040 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:38.299 [122/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:38.299 [123/740] Linking static target lib/librte_net.a 00:02:38.299 [124/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.299 [125/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:38.299 [126/740] Linking static target lib/librte_mbuf.a 00:02:38.299 [127/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:38.299 [128/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:38.299 [129/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.558 [130/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.558 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:38.558 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:38.558 [133/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:38.816 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:38.816 [135/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.816 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:38.816 [137/740] 
Generating lib/rte_ethdev_def with a custom command 00:02:38.816 [138/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:38.816 [139/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:39.075 [140/740] Generating lib/rte_pci_def with a custom command 00:02:39.075 [141/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:39.075 [142/740] Generating lib/rte_pci_mingw with a custom command 00:02:39.075 [143/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:39.075 [144/740] Linking static target lib/librte_pci.a 00:02:39.075 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:39.075 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:39.075 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:39.075 [148/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:39.075 [149/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.075 [150/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:39.075 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:39.335 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:39.335 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:39.335 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:39.335 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:39.335 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:39.335 [157/740] Generating lib/rte_cmdline_def with a custom command 00:02:39.335 [158/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:39.335 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:39.335 [160/740] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:39.335 [161/740] Generating lib/rte_metrics_def with a custom command 00:02:39.335 [162/740] Generating lib/rte_metrics_mingw with a custom command 00:02:39.335 [163/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:39.335 [164/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:39.335 [165/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:39.335 [166/740] Generating lib/rte_hash_def with a custom command 00:02:39.594 [167/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:39.594 [168/740] Generating lib/rte_hash_mingw with a custom command 00:02:39.594 [169/740] Linking static target lib/librte_cmdline.a 00:02:39.594 [170/740] Generating lib/rte_timer_def with a custom command 00:02:39.594 [171/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:39.594 [172/740] Generating lib/rte_timer_mingw with a custom command 00:02:39.594 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:39.594 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:39.594 [175/740] Linking static target lib/librte_metrics.a 00:02:39.852 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:39.852 [177/740] Linking static target lib/librte_timer.a 00:02:40.111 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.111 [179/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:40.111 [180/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:40.111 [181/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.111 [182/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:40.111 [183/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:40.369 [184/740] Generating lib/rte_acl_def with a custom command 00:02:40.369 [185/740] Generating lib/rte_acl_mingw with a custom command 00:02:40.369 [186/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:40.369 [187/740] Generating lib/rte_bbdev_def with a custom command 00:02:40.369 [188/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:40.369 [189/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:40.369 [190/740] Generating lib/rte_bitratestats_def with a custom command 00:02:40.369 [191/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:40.369 [192/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:40.369 [193/740] Linking static target lib/librte_ethdev.a 00:02:40.937 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:40.937 [195/740] Linking static target lib/librte_bitratestats.a 00:02:40.937 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:40.937 [197/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:40.937 [198/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.937 [199/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:40.937 [200/740] Linking static target lib/librte_bbdev.a 00:02:41.197 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:41.456 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:41.456 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:41.456 [204/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.715 [205/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:41.715 [206/740] Linking static target lib/librte_hash.a 00:02:41.715 [207/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:41.715 [208/740] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:41.974 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:41.974 [210/740] Generating lib/rte_bpf_def with a custom command 00:02:41.974 [211/740] Generating lib/rte_bpf_mingw with a custom command 00:02:41.974 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:42.233 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:02:42.233 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:42.233 [215/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.233 [216/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:42.233 [217/740] Linking static target lib/librte_cfgfile.a 00:02:42.233 [218/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:42.233 [219/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:42.233 [220/740] Generating lib/rte_compressdev_def with a custom command 00:02:42.233 [221/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:42.233 [222/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:42.492 [223/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:42.492 [224/740] Linking static target lib/librte_bpf.a 00:02:42.492 [225/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.492 [226/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:42.492 [227/740] Generating lib/rte_cryptodev_def with a custom command 00:02:42.492 [228/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:42.750 [229/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:42.750 [230/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:42.750 [231/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:42.750 [232/740] Generating lib/rte_distributor_def with a custom command 00:02:42.750 [233/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:42.750 [234/740] Generating lib/rte_distributor_mingw with a custom command 00:02:42.750 [235/740] Linking static target lib/librte_compressdev.a 00:02:42.750 [236/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:42.750 [237/740] Linking static target lib/librte_acl.a 00:02:42.750 [238/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:42.750 [239/740] Generating lib/rte_efd_def with a custom command 00:02:42.750 [240/740] Generating lib/rte_efd_mingw with a custom command 00:02:43.009 [241/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.009 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:43.009 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:43.266 [244/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.266 [245/740] Linking target lib/librte_eal.so.23.0 00:02:43.266 [246/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:43.266 [247/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:43.266 [248/740] Linking static target lib/librte_distributor.a 00:02:43.266 [249/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:43.266 [250/740] Linking target lib/librte_ring.so.23.0 00:02:43.524 [251/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.524 [252/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:43.524 [253/740] Linking target lib/librte_meter.so.23.0 00:02:43.524 [254/740] Linking target lib/librte_pci.so.23.0 
00:02:43.524 [255/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:43.524 [256/740] Linking target lib/librte_rcu.so.23.0 00:02:43.524 [257/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.524 [258/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:43.524 [259/740] Linking target lib/librte_mempool.so.23.0 00:02:43.524 [260/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:43.524 [261/740] Linking target lib/librte_timer.so.23.0 00:02:43.524 [262/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:43.524 [263/740] Linking target lib/librte_acl.so.23.0 00:02:43.782 [264/740] Linking target lib/librte_cfgfile.so.23.0 00:02:43.782 [265/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:43.782 [266/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:43.782 [267/740] Linking target lib/librte_mbuf.so.23.0 00:02:43.782 [268/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:43.782 [269/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:43.782 [270/740] Linking target lib/librte_net.so.23.0 00:02:44.041 [271/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:44.041 [272/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:44.041 [273/740] Linking target lib/librte_bbdev.so.23.0 00:02:44.041 [274/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:44.041 [275/740] Linking target lib/librte_cmdline.so.23.0 00:02:44.041 [276/740] Linking target lib/librte_hash.so.23.0 00:02:44.041 [277/740] Linking target lib/librte_compressdev.so.23.0 00:02:44.041 [278/740] Linking static target lib/librte_efd.a 00:02:44.041 [279/740] Linking target 
lib/librte_distributor.so.23.0 00:02:44.041 [280/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:44.041 [281/740] Generating lib/rte_eventdev_def with a custom command 00:02:44.041 [282/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:44.041 [283/740] Generating lib/rte_gpudev_def with a custom command 00:02:44.041 [284/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:44.041 [285/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:44.300 [286/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.300 [287/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.300 [288/740] Linking target lib/librte_ethdev.so.23.0 00:02:44.300 [289/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:44.300 [290/740] Linking target lib/librte_efd.so.23.0 00:02:44.300 [291/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:44.300 [292/740] Linking static target lib/librte_cryptodev.a 00:02:44.300 [293/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:44.300 [294/740] Linking target lib/librte_metrics.so.23.0 00:02:44.558 [295/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:44.558 [296/740] Linking target lib/librte_bitratestats.so.23.0 00:02:44.558 [297/740] Linking target lib/librte_bpf.so.23.0 00:02:44.558 [298/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:44.558 [299/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:44.558 [300/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:44.558 [301/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:44.558 [302/740] Linking static target lib/librte_gpudev.a 00:02:44.558 [303/740] 
Generating lib/rte_gro_def with a custom command 00:02:44.558 [304/740] Generating lib/rte_gro_mingw with a custom command 00:02:44.558 [305/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:44.816 [306/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:44.816 [307/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:44.816 [308/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:45.073 [309/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:45.073 [310/740] Generating lib/rte_gso_def with a custom command 00:02:45.073 [311/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:45.073 [312/740] Generating lib/rte_gso_mingw with a custom command 00:02:45.074 [313/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:45.074 [314/740] Linking static target lib/librte_gro.a 00:02:45.074 [315/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:45.074 [316/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:45.331 [317/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:45.331 [318/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:45.331 [319/740] Linking static target lib/librte_gso.a 00:02:45.331 [320/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:45.331 [321/740] Linking static target lib/librte_eventdev.a 00:02:45.331 [322/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.331 [323/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.331 [324/740] Linking target lib/librte_gro.so.23.0 00:02:45.331 [325/740] Linking target lib/librte_gpudev.so.23.0 00:02:45.331 [326/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.331 [327/740] Generating lib/rte_ip_frag_def with a 
custom command 00:02:45.331 [328/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:45.331 [329/740] Linking target lib/librte_gso.so.23.0 00:02:45.590 [330/740] Generating lib/rte_jobstats_def with a custom command 00:02:45.590 [331/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:45.590 [332/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:45.590 [333/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:45.590 [334/740] Generating lib/rte_latencystats_def with a custom command 00:02:45.590 [335/740] Linking static target lib/librte_jobstats.a 00:02:45.590 [336/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:45.590 [337/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:45.590 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:45.590 [339/740] Generating lib/rte_lpm_def with a custom command 00:02:45.590 [340/740] Generating lib/rte_lpm_mingw with a custom command 00:02:45.590 [341/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:45.847 [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:45.847 [343/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.847 [344/740] Linking target lib/librte_jobstats.so.23.0 00:02:45.847 [345/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:45.847 [346/740] Linking static target lib/librte_ip_frag.a 00:02:45.847 [347/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:45.847 [348/740] Linking static target lib/librte_latencystats.a 00:02:46.106 [349/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.106 [350/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:46.106 
[351/740] Linking target lib/librte_cryptodev.so.23.0 00:02:46.106 [352/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:46.106 [353/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:46.106 [354/740] Generating lib/rte_member_def with a custom command 00:02:46.106 [355/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:46.106 [356/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.106 [357/740] Generating lib/rte_member_mingw with a custom command 00:02:46.106 [358/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.106 [359/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:46.106 [360/740] Linking target lib/librte_latencystats.so.23.0 00:02:46.106 [361/740] Generating lib/rte_pcapng_def with a custom command 00:02:46.106 [362/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:46.106 [363/740] Linking target lib/librte_ip_frag.so.23.0 00:02:46.365 [364/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:46.365 [365/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:46.365 [366/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:46.365 [367/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:46.365 [368/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:46.365 [369/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:46.623 [370/740] Linking static target lib/librte_lpm.a 00:02:46.623 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:46.623 [372/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:46.623 [373/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:46.623 [374/740] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:46.623 [375/740] Generating lib/rte_power_def with a custom command 00:02:46.623 [376/740] Generating lib/rte_power_mingw with a custom command 00:02:46.623 [377/740] Generating lib/rte_rawdev_def with a custom command 00:02:46.623 [378/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:46.881 [379/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.881 [380/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.881 [381/740] Linking target lib/librte_lpm.so.23.0 00:02:46.881 [382/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:46.881 [383/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:46.881 [384/740] Linking static target lib/librte_pcapng.a 00:02:46.881 [385/740] Generating lib/rte_regexdev_def with a custom command 00:02:46.881 [386/740] Linking target lib/librte_eventdev.so.23.0 00:02:46.881 [387/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:46.881 [388/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:46.881 [389/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:46.881 [390/740] Generating lib/rte_dmadev_def with a custom command 00:02:46.881 [391/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:46.881 [392/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:46.881 [393/740] Linking static target lib/librte_rawdev.a 00:02:46.881 [394/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:47.141 [395/740] Generating lib/rte_rib_def with a custom command 00:02:47.141 [396/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:47.141 [397/740] Generating lib/rte_rib_mingw with a custom command 00:02:47.141 [398/740] Generating lib/rte_reorder_def 
with a custom command 00:02:47.141 [399/740] Generating lib/rte_reorder_mingw with a custom command 00:02:47.141 [400/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.141 [401/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:47.141 [402/740] Linking static target lib/librte_power.a 00:02:47.141 [403/740] Linking target lib/librte_pcapng.so.23.0 00:02:47.141 [404/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:47.141 [405/740] Linking static target lib/librte_regexdev.a 00:02:47.141 [406/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:47.141 [407/740] Linking static target lib/librte_dmadev.a 00:02:47.400 [408/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:47.400 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:47.400 [410/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.400 [411/740] Linking target lib/librte_rawdev.so.23.0 00:02:47.400 [412/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:47.400 [413/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:47.400 [414/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:47.400 [415/740] Generating lib/rte_sched_def with a custom command 00:02:47.400 [416/740] Generating lib/rte_sched_mingw with a custom command 00:02:47.669 [417/740] Generating lib/rte_security_def with a custom command 00:02:47.669 [418/740] Generating lib/rte_security_mingw with a custom command 00:02:47.669 [419/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:47.669 [420/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:47.669 [421/740] Linking static target lib/librte_reorder.a 00:02:47.669 [422/740] Linking static target lib/librte_member.a 00:02:47.669 [423/740] Compiling C 
object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:47.669 [424/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:47.669 [425/740] Generating lib/rte_stack_def with a custom command 00:02:47.669 [426/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.669 [427/740] Generating lib/rte_stack_mingw with a custom command 00:02:47.669 [428/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:47.669 [429/740] Linking static target lib/librte_stack.a 00:02:47.669 [430/740] Linking target lib/librte_dmadev.so.23.0 00:02:47.669 [431/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:47.670 [432/740] Linking static target lib/librte_rib.a 00:02:47.670 [433/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.940 [434/740] Linking target lib/librte_reorder.so.23.0 00:02:47.940 [435/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:47.940 [436/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:47.940 [437/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.940 [438/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.940 [439/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.940 [440/740] Linking target lib/librte_member.so.23.0 00:02:47.940 [441/740] Linking target lib/librte_regexdev.so.23.0 00:02:47.940 [442/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.940 [443/740] Linking target lib/librte_stack.so.23.0 00:02:47.940 [444/740] Linking target lib/librte_power.so.23.0 00:02:47.940 [445/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:47.940 [446/740] Linking static target lib/librte_security.a 00:02:48.198 [447/740] 
Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.198 [448/740] Linking target lib/librte_rib.so.23.0 00:02:48.198 [449/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:48.198 [450/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:48.198 [451/740] Generating lib/rte_vhost_def with a custom command 00:02:48.198 [452/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:48.198 [453/740] Generating lib/rte_vhost_mingw with a custom command 00:02:48.455 [454/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.455 [455/740] Linking target lib/librte_security.so.23.0 00:02:48.455 [456/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:48.455 [457/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:48.714 [458/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:48.714 [459/740] Linking static target lib/librte_sched.a 00:02:48.714 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:48.714 [461/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:48.972 [462/740] Generating lib/rte_ipsec_def with a custom command 00:02:48.972 [463/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:48.972 [464/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:48.972 [465/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:48.972 [466/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.972 [467/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:48.972 [468/740] Linking target lib/librte_sched.so.23.0 00:02:49.229 [469/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:49.229 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:49.229 [471/740] 
Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:49.229 [472/740] Generating lib/rte_fib_def with a custom command 00:02:49.229 [473/740] Generating lib/rte_fib_mingw with a custom command 00:02:49.229 [474/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:49.487 [475/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:49.746 [476/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:49.746 [477/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:49.746 [478/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:49.746 [479/740] Linking static target lib/librte_ipsec.a 00:02:49.746 [480/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:50.006 [481/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:50.006 [482/740] Linking static target lib/librte_fib.a 00:02:50.006 [483/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:50.006 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:50.006 [485/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.006 [486/740] Linking target lib/librte_ipsec.so.23.0 00:02:50.266 [487/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.266 [488/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:50.266 [489/740] Linking target lib/librte_fib.so.23.0 00:02:50.266 [490/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:50.266 [491/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:50.832 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:50.832 [493/740] Generating lib/rte_port_def with a custom command 00:02:50.832 [494/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:50.832 [495/740] Generating lib/rte_port_mingw with a custom command 
00:02:50.832 [496/740] Generating lib/rte_pdump_def with a custom command 00:02:50.832 [497/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:50.832 [498/740] Generating lib/rte_pdump_mingw with a custom command 00:02:50.832 [499/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:50.832 [500/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:50.832 [501/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:50.832 [502/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:51.091 [503/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:51.091 [504/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:51.091 [505/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:51.350 [506/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:51.350 [507/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:51.350 [508/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:51.350 [509/740] Linking static target lib/librte_port.a 00:02:51.350 [510/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:51.610 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:51.610 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:51.610 [513/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:51.610 [514/740] Linking static target lib/librte_pdump.a 00:02:51.869 [515/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.869 [516/740] Linking target lib/librte_port.so.23.0 00:02:51.869 [517/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.869 [518/740] Linking target lib/librte_pdump.so.23.0 00:02:51.869 [519/740] 
Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:51.869 [520/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:51.869 [521/740] Generating lib/rte_table_def with a custom command 00:02:51.869 [522/740] Generating lib/rte_table_mingw with a custom command 00:02:52.127 [523/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:52.127 [524/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:52.127 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:52.385 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:52.385 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:52.385 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:52.385 [529/740] Generating lib/rte_pipeline_def with a custom command 00:02:52.385 [530/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:52.385 [531/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:52.385 [532/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:52.385 [533/740] Linking static target lib/librte_table.a 00:02:52.642 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:52.900 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:52.900 [536/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:52.900 [537/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.159 [538/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:53.159 [539/740] Linking target lib/librte_table.so.23.0 00:02:53.159 [540/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:53.159 [541/740] Generating lib/rte_graph_def with a custom command 00:02:53.159 [542/740] Compiling C 
object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:53.159 [543/740] Generating lib/rte_graph_mingw with a custom command 00:02:53.418 [544/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:53.418 [545/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:53.418 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:53.418 [547/740] Linking static target lib/librte_graph.a 00:02:53.418 [548/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:53.677 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:53.677 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:53.677 [551/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:53.677 [552/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:53.936 [553/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:53.936 [554/740] Generating lib/rte_node_def with a custom command 00:02:53.936 [555/740] Generating lib/rte_node_mingw with a custom command 00:02:53.936 [556/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.936 [557/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:53.936 [558/740] Linking target lib/librte_graph.so.23.0 00:02:54.194 [559/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:54.194 [560/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:54.194 [561/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:54.194 [562/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:54.194 [563/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:54.194 [564/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:54.194 [565/740] Generating drivers/rte_bus_pci_def with a custom command 
00:02:54.194 [566/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:54.452 [567/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:54.452 [568/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:54.452 [569/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:54.452 [570/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:54.452 [571/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:54.452 [572/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:54.452 [573/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:54.452 [574/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:54.452 [575/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:54.452 [576/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:54.452 [577/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:54.452 [578/740] Linking static target lib/librte_node.a 00:02:54.452 [579/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:54.452 [580/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:54.710 [581/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:54.710 [582/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.710 [583/740] Linking static target drivers/librte_bus_vdev.a 00:02:54.710 [584/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.710 [585/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:54.710 [586/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.710 [587/740] Linking static target drivers/librte_bus_pci.a 00:02:54.710 [588/740] Linking target lib/librte_node.so.23.0 00:02:54.710 [589/740] 
Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.710 [590/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.969 [591/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.969 [592/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:54.969 [593/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:54.969 [594/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.969 [595/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:55.228 [596/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:55.228 [597/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:55.228 [598/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:55.228 [599/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:55.228 [600/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:55.228 [601/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:55.486 [602/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:55.486 [603/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:55.486 [604/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.486 [605/740] Linking static target drivers/librte_mempool_ring.a 00:02:55.486 [606/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.486 [607/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:55.744 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:56.003 [609/740] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:56.003 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:56.003 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:56.567 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:56.567 [613/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:56.567 [614/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:56.824 [615/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:56.824 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:57.081 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:57.081 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:57.081 [619/740] Generating drivers/rte_net_i40e_def with a custom command 00:02:57.081 [620/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:57.081 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:58.029 [622/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:58.029 [623/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:58.029 [624/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:58.029 [625/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:58.287 [626/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:58.287 [627/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:58.287 [628/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:58.545 [629/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:58.545 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 
00:02:58.545 [631/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:58.545 [632/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:58.545 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:59.111 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:59.112 [635/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:59.112 [636/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:59.112 [637/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:59.112 [638/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:59.370 [639/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:59.370 [640/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:59.370 [641/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:59.370 [642/740] Linking static target drivers/librte_net_i40e.a 00:02:59.370 [643/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:59.370 [644/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:59.629 [645/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:59.629 [646/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:59.629 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:59.629 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:59.893 [649/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.893 [650/740] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:59.893 [651/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:59.893 [652/740] Linking target drivers/librte_net_i40e.so.23.0 00:03:00.162 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:00.162 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:00.420 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:00.420 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:00.420 [657/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:00.420 [658/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:00.420 [659/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:00.420 [660/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:00.679 [661/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:00.679 [662/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:00.679 [663/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:00.679 [664/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:00.679 [665/740] Linking static target lib/librte_vhost.a 00:03:00.937 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:00.937 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:01.196 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:01.455 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:01.455 [670/740] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:01.715 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:01.715 [672/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.715 [673/740] Linking target lib/librte_vhost.so.23.0 00:03:01.715 [674/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:01.973 [675/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:01.973 [676/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:01.973 [677/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:01.973 [678/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:01.973 [679/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:02.233 [680/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:02.233 [681/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:02.233 [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:02.233 [683/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:02.233 [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:02.492 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:02.492 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:02.492 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:02.492 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:02.492 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:02.752 [690/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:02.752 [691/740] 
Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:02.752 [692/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:03.012 [693/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:03.012 [694/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:03.012 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:03.272 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:03.531 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:03.531 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:03.531 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:03.531 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:03.790 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:04.049 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:04.049 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:04.049 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:04.049 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:04.307 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:04.307 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:04.565 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:04.823 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:04.823 [710/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:04.823 [711/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:05.081 [712/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:05.081 [713/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:05.081 [714/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:05.081 [715/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:05.341 [716/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:05.341 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:05.601 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:05.861 [719/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:06.431 [720/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:06.431 [721/740] Linking static target lib/librte_pipeline.a 00:03:07.001 [722/740] Linking target app/dpdk-test-acl 00:03:07.001 [723/740] Linking target app/dpdk-pdump 00:03:07.001 [724/740] Linking target app/dpdk-test-cmdline 00:03:07.001 [725/740] Linking target app/dpdk-test-crypto-perf 00:03:07.001 [726/740] Linking target app/dpdk-test-eventdev 00:03:07.001 [727/740] Linking target app/dpdk-dumpcap 00:03:07.001 [728/740] Linking target app/dpdk-proc-info 00:03:07.001 [729/740] Linking target app/dpdk-test-bbdev 00:03:07.001 [730/740] Linking target app/dpdk-test-compress-perf 00:03:07.262 [731/740] Linking target app/dpdk-test-fib 00:03:07.262 [732/740] Linking target app/dpdk-test-gpudev 00:03:07.262 [733/740] Linking target app/dpdk-test-security-perf 00:03:07.262 [734/740] Linking target app/dpdk-test-flow-perf 00:03:07.262 [735/740] Linking target app/dpdk-test-sad 00:03:07.262 [736/740] Linking target app/dpdk-test-pipeline 00:03:07.262 [737/740] Linking target app/dpdk-testpmd 00:03:07.262 [738/740] Linking target app/dpdk-test-regex 00:03:12.547 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.547 [740/740] Linking target lib/librte_pipeline.so.23.0 00:03:12.547 13:16:53 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:12.547 13:16:53 build_native_dpdk -- 
common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:12.547 13:16:53 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:12.547 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:12.547 [0/1] Installing files. 00:03:12.547 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:12.547 Installing 
/home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:12.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.548 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:12.548 Installing 
/home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.548 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.549 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.549 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:12.549 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.549 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.549 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.550 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.550 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.550 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.551 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.551 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.552 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.552 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.552 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.552 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.552 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.552 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:12.552 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:12.552 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:12.552 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:12.552 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.552 Installing lib/librte_kvargs.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.552 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.552 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.552 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.552 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.552 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.552 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.552 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.552 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.552 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.552 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.552 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.552 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.552 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.812 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.812 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.812 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.812 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.813 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.813 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.813 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.813 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.813 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.813 Installing 
lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:12.813 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:12.813 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:12.813 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:12.813 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:12.813 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.813 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.813 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.813 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.813 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.813 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.813 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.813 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:12.813 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:13.076 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:13.076 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:13.076 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:13.076 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:13.076 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:13.076 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:13.076 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:13.076 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:13.076 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.076 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.076 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.076 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:13.076 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:13.076 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:13.076 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:13.076 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:13.076 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:13.076 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:13.076 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.077 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.078 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:13.079 Installing
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to 
/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:13.079 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:13.079 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:13.079 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:13.079 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:13.079 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:13.079 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:13.079 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:13.079 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:13.079 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:13.079 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:13.079 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:13.079 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:13.079 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:13.079 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:13.079 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:13.079 Installing symlink pointing to librte_net.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:13.079 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:13.079 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:13.079 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:13.079 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:13.079 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:13.079 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:13.080 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:13.080 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:13.080 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:13.080 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:13.080 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:13.080 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:13.080 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:13.080 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:13.080 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:13.080 Installing symlink pointing to librte_acl.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:13.080 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:13.080 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:13.080 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:13.080 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:13.080 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:13.080 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:13.080 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:13.080 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:13.080 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:13.080 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:13.080 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:13.080 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:13.080 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:13.080 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:13.080 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 
00:03:13.080 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:13.080 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:13.080 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:13.080 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:13.080 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:13.080 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:13.080 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:13.080 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:13.080 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:13.080 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:13.080 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:13.080 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:13.080 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:13.080 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:13.080 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:13.080 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:13.080 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:13.080 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:13.080 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:13.080 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:13.080 Installing 
symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:13.080 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:13.080 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:13.080 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:13.080 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:13.080 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:13.080 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:13.080 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:13.080 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:13.080 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:13.080 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:13.080 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:13.080 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:13.080 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:13.080 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:13.080 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 
00:03:13.080 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:13.080 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:13.080 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:13.080 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:13.080 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:13.080 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:13.080 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:13.080 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:13.080 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:13.080 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:13.080 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:13.080 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:13.080 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:13.080 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:13.080 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:13.080 Installing symlink pointing to librte_stack.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:13.080 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:13.080 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:13.080 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:13.080 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:13.080 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:13.080 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:13.080 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:13.080 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:13.080 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:13.080 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:13.080 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:13.080 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:13.080 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:13.080 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:13.080 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:13.080 Installing symlink pointing to librte_graph.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:13.080 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:13.080 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:13.080 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:13.080 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:13.080 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:13.080 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:13.080 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:13.080 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:13.080 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:13.080 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:13.080 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:13.080 13:16:54 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:13.080 13:16:54 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:13.080 00:03:13.080 real 0m44.375s 00:03:13.080 user 4m18.880s 00:03:13.080 sys 0m49.390s 00:03:13.080 13:16:54 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 
00:03:13.081 13:16:54 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:13.081 ************************************ 00:03:13.081 END TEST build_native_dpdk 00:03:13.081 ************************************ 00:03:13.081 13:16:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:13.081 13:16:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:13.081 13:16:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:13.081 13:16:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:13.081 13:16:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:13.081 13:16:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:13.081 13:16:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:13.081 13:16:54 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:13.341 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:13.600 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.600 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:13.600 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:14.169 Using 'verbs' RDMA provider 00:03:30.012 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:48.191 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:48.191 Creating mk/config.mk...done. 00:03:48.191 Creating mk/cc.flags.mk...done. 00:03:48.191 Type 'make' to build. 
00:03:48.191 13:17:28 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:48.191 13:17:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:48.191 13:17:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:48.191 13:17:28 -- common/autotest_common.sh@10 -- $ set +x 00:03:48.191 ************************************ 00:03:48.191 START TEST make 00:03:48.191 ************************************ 00:03:48.191 13:17:28 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:48.191 make[1]: Nothing to be done for 'all'. 00:04:26.915 CC lib/ut/ut.o 00:04:26.915 CC lib/log/log.o 00:04:26.915 CC lib/log/log_deprecated.o 00:04:26.915 CC lib/log/log_flags.o 00:04:26.915 CC lib/ut_mock/mock.o 00:04:26.915 LIB libspdk_log.a 00:04:26.915 LIB libspdk_ut.a 00:04:26.915 LIB libspdk_ut_mock.a 00:04:26.915 SO libspdk_log.so.7.1 00:04:27.176 SO libspdk_ut.so.2.0 00:04:27.176 SO libspdk_ut_mock.so.6.0 00:04:27.176 SYMLINK libspdk_log.so 00:04:27.176 SYMLINK libspdk_ut.so 00:04:27.176 SYMLINK libspdk_ut_mock.so 00:04:27.435 CC lib/dma/dma.o 00:04:27.435 CC lib/util/base64.o 00:04:27.435 CXX lib/trace_parser/trace.o 00:04:27.435 CC lib/util/bit_array.o 00:04:27.435 CC lib/util/cpuset.o 00:04:27.435 CC lib/util/crc32.o 00:04:27.435 CC lib/util/crc16.o 00:04:27.435 CC lib/util/crc32c.o 00:04:27.435 CC lib/ioat/ioat.o 00:04:27.435 CC lib/util/crc32_ieee.o 00:04:27.435 CC lib/vfio_user/host/vfio_user_pci.o 00:04:27.435 CC lib/util/crc64.o 00:04:27.435 CC lib/util/dif.o 00:04:27.435 CC lib/vfio_user/host/vfio_user.o 00:04:27.435 LIB libspdk_dma.a 00:04:27.435 CC lib/util/fd.o 00:04:27.693 SO libspdk_dma.so.5.0 00:04:27.693 CC lib/util/fd_group.o 00:04:27.693 CC lib/util/file.o 00:04:27.693 CC lib/util/hexlify.o 00:04:27.693 SYMLINK libspdk_dma.so 00:04:27.693 CC lib/util/iov.o 00:04:27.693 LIB libspdk_ioat.a 00:04:27.693 SO libspdk_ioat.so.7.0 00:04:27.693 CC lib/util/math.o 00:04:27.693 SYMLINK libspdk_ioat.so 00:04:27.693 CC lib/util/net.o 00:04:27.693 LIB 
libspdk_vfio_user.a 00:04:27.693 CC lib/util/pipe.o 00:04:27.693 CC lib/util/strerror_tls.o 00:04:27.693 CC lib/util/string.o 00:04:27.693 SO libspdk_vfio_user.so.5.0 00:04:27.693 CC lib/util/uuid.o 00:04:27.693 SYMLINK libspdk_vfio_user.so 00:04:27.693 CC lib/util/xor.o 00:04:27.952 CC lib/util/zipf.o 00:04:27.952 CC lib/util/md5.o 00:04:28.226 LIB libspdk_util.a 00:04:28.226 SO libspdk_util.so.10.1 00:04:28.226 LIB libspdk_trace_parser.a 00:04:28.537 SYMLINK libspdk_util.so 00:04:28.537 SO libspdk_trace_parser.so.6.0 00:04:28.537 SYMLINK libspdk_trace_parser.so 00:04:28.537 CC lib/conf/conf.o 00:04:28.537 CC lib/idxd/idxd.o 00:04:28.537 CC lib/idxd/idxd_user.o 00:04:28.537 CC lib/env_dpdk/env.o 00:04:28.537 CC lib/idxd/idxd_kernel.o 00:04:28.537 CC lib/env_dpdk/memory.o 00:04:28.537 CC lib/env_dpdk/pci.o 00:04:28.537 CC lib/vmd/vmd.o 00:04:28.537 CC lib/json/json_parse.o 00:04:28.537 CC lib/rdma_utils/rdma_utils.o 00:04:28.825 CC lib/env_dpdk/init.o 00:04:28.825 LIB libspdk_conf.a 00:04:28.825 SO libspdk_conf.so.6.0 00:04:28.825 CC lib/env_dpdk/threads.o 00:04:28.825 SYMLINK libspdk_conf.so 00:04:28.825 CC lib/env_dpdk/pci_ioat.o 00:04:28.825 CC lib/json/json_util.o 00:04:28.825 LIB libspdk_rdma_utils.a 00:04:28.825 SO libspdk_rdma_utils.so.1.0 00:04:28.825 CC lib/env_dpdk/pci_virtio.o 00:04:28.825 SYMLINK libspdk_rdma_utils.so 00:04:28.825 CC lib/json/json_write.o 00:04:28.825 CC lib/env_dpdk/pci_vmd.o 00:04:28.825 CC lib/vmd/led.o 00:04:28.825 CC lib/env_dpdk/pci_idxd.o 00:04:29.084 CC lib/env_dpdk/pci_event.o 00:04:29.084 CC lib/env_dpdk/sigbus_handler.o 00:04:29.084 CC lib/env_dpdk/pci_dpdk.o 00:04:29.084 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:29.084 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:29.084 CC lib/rdma_provider/common.o 00:04:29.084 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:29.084 LIB libspdk_json.a 00:04:29.084 LIB libspdk_idxd.a 00:04:29.343 SO libspdk_json.so.6.0 00:04:29.343 SO libspdk_idxd.so.12.1 00:04:29.343 LIB libspdk_vmd.a 00:04:29.343 
SYMLINK libspdk_json.so 00:04:29.343 SO libspdk_vmd.so.6.0 00:04:29.343 SYMLINK libspdk_idxd.so 00:04:29.343 SYMLINK libspdk_vmd.so 00:04:29.343 LIB libspdk_rdma_provider.a 00:04:29.343 SO libspdk_rdma_provider.so.7.0 00:04:29.602 SYMLINK libspdk_rdma_provider.so 00:04:29.602 CC lib/jsonrpc/jsonrpc_server.o 00:04:29.602 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:29.602 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:29.602 CC lib/jsonrpc/jsonrpc_client.o 00:04:29.863 LIB libspdk_jsonrpc.a 00:04:29.863 SO libspdk_jsonrpc.so.6.0 00:04:30.124 LIB libspdk_env_dpdk.a 00:04:30.124 SYMLINK libspdk_jsonrpc.so 00:04:30.124 SO libspdk_env_dpdk.so.15.1 00:04:30.124 SYMLINK libspdk_env_dpdk.so 00:04:30.384 CC lib/rpc/rpc.o 00:04:30.645 LIB libspdk_rpc.a 00:04:30.645 SO libspdk_rpc.so.6.0 00:04:30.906 SYMLINK libspdk_rpc.so 00:04:31.165 CC lib/notify/notify_rpc.o 00:04:31.165 CC lib/notify/notify.o 00:04:31.165 CC lib/trace/trace.o 00:04:31.165 CC lib/trace/trace_rpc.o 00:04:31.165 CC lib/trace/trace_flags.o 00:04:31.165 CC lib/keyring/keyring_rpc.o 00:04:31.165 CC lib/keyring/keyring.o 00:04:31.425 LIB libspdk_notify.a 00:04:31.425 SO libspdk_notify.so.6.0 00:04:31.425 LIB libspdk_keyring.a 00:04:31.425 SYMLINK libspdk_notify.so 00:04:31.425 LIB libspdk_trace.a 00:04:31.425 SO libspdk_keyring.so.2.0 00:04:31.425 SO libspdk_trace.so.11.0 00:04:31.425 SYMLINK libspdk_keyring.so 00:04:31.685 SYMLINK libspdk_trace.so 00:04:31.949 CC lib/thread/thread.o 00:04:31.949 CC lib/thread/iobuf.o 00:04:31.949 CC lib/sock/sock.o 00:04:31.949 CC lib/sock/sock_rpc.o 00:04:32.521 LIB libspdk_sock.a 00:04:32.521 SO libspdk_sock.so.10.0 00:04:32.521 SYMLINK libspdk_sock.so 00:04:33.089 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:33.089 CC lib/nvme/nvme_ctrlr.o 00:04:33.089 CC lib/nvme/nvme_fabric.o 00:04:33.089 CC lib/nvme/nvme_ns_cmd.o 00:04:33.089 CC lib/nvme/nvme_ns.o 00:04:33.089 CC lib/nvme/nvme_pcie_common.o 00:04:33.089 CC lib/nvme/nvme_pcie.o 00:04:33.089 CC lib/nvme/nvme.o 00:04:33.089 CC 
lib/nvme/nvme_qpair.o 00:04:33.655 LIB libspdk_thread.a 00:04:33.655 SO libspdk_thread.so.11.0 00:04:33.655 CC lib/nvme/nvme_quirks.o 00:04:33.655 CC lib/nvme/nvme_transport.o 00:04:33.655 SYMLINK libspdk_thread.so 00:04:33.655 CC lib/nvme/nvme_discovery.o 00:04:33.655 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:33.655 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:33.914 CC lib/nvme/nvme_tcp.o 00:04:33.914 CC lib/accel/accel.o 00:04:33.914 CC lib/blob/blobstore.o 00:04:34.172 CC lib/nvme/nvme_opal.o 00:04:34.172 CC lib/accel/accel_rpc.o 00:04:34.172 CC lib/nvme/nvme_io_msg.o 00:04:34.172 CC lib/blob/request.o 00:04:34.430 CC lib/accel/accel_sw.o 00:04:34.430 CC lib/init/json_config.o 00:04:34.430 CC lib/virtio/virtio.o 00:04:34.688 CC lib/virtio/virtio_vhost_user.o 00:04:34.689 CC lib/blob/zeroes.o 00:04:34.689 CC lib/init/subsystem.o 00:04:34.689 CC lib/virtio/virtio_vfio_user.o 00:04:34.947 CC lib/init/subsystem_rpc.o 00:04:34.947 CC lib/blob/blob_bs_dev.o 00:04:34.947 CC lib/virtio/virtio_pci.o 00:04:34.947 CC lib/fsdev/fsdev.o 00:04:34.947 CC lib/nvme/nvme_poll_group.o 00:04:34.947 CC lib/init/rpc.o 00:04:34.947 CC lib/fsdev/fsdev_io.o 00:04:35.206 CC lib/fsdev/fsdev_rpc.o 00:04:35.206 LIB libspdk_init.a 00:04:35.206 LIB libspdk_virtio.a 00:04:35.206 SO libspdk_init.so.6.0 00:04:35.206 SO libspdk_virtio.so.7.0 00:04:35.206 CC lib/nvme/nvme_zns.o 00:04:35.206 SYMLINK libspdk_init.so 00:04:35.206 CC lib/nvme/nvme_stubs.o 00:04:35.206 LIB libspdk_accel.a 00:04:35.206 SYMLINK libspdk_virtio.so 00:04:35.206 SO libspdk_accel.so.16.0 00:04:35.206 CC lib/nvme/nvme_auth.o 00:04:35.465 SYMLINK libspdk_accel.so 00:04:35.465 CC lib/nvme/nvme_cuse.o 00:04:35.465 CC lib/nvme/nvme_rdma.o 00:04:35.465 CC lib/event/app.o 00:04:35.465 CC lib/event/reactor.o 00:04:35.465 LIB libspdk_fsdev.a 00:04:35.465 CC lib/bdev/bdev.o 00:04:35.465 SO libspdk_fsdev.so.2.0 00:04:35.723 SYMLINK libspdk_fsdev.so 00:04:35.723 CC lib/event/log_rpc.o 00:04:35.723 CC lib/event/app_rpc.o 00:04:35.723 CC 
lib/event/scheduler_static.o 00:04:35.723 CC lib/bdev/bdev_rpc.o 00:04:35.982 CC lib/bdev/bdev_zone.o 00:04:35.982 CC lib/bdev/part.o 00:04:35.982 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:35.982 LIB libspdk_event.a 00:04:35.982 SO libspdk_event.so.14.0 00:04:35.982 SYMLINK libspdk_event.so 00:04:35.982 CC lib/bdev/scsi_nvme.o 00:04:36.551 LIB libspdk_fuse_dispatcher.a 00:04:36.551 SO libspdk_fuse_dispatcher.so.1.0 00:04:36.551 SYMLINK libspdk_fuse_dispatcher.so 00:04:36.551 LIB libspdk_nvme.a 00:04:36.811 SO libspdk_nvme.so.15.0 00:04:37.069 SYMLINK libspdk_nvme.so 00:04:37.326 LIB libspdk_blob.a 00:04:37.326 SO libspdk_blob.so.11.0 00:04:37.585 SYMLINK libspdk_blob.so 00:04:37.843 CC lib/blobfs/blobfs.o 00:04:37.843 CC lib/blobfs/tree.o 00:04:37.843 CC lib/lvol/lvol.o 00:04:38.412 LIB libspdk_bdev.a 00:04:38.412 SO libspdk_bdev.so.17.0 00:04:38.669 SYMLINK libspdk_bdev.so 00:04:38.669 LIB libspdk_blobfs.a 00:04:38.669 SO libspdk_blobfs.so.10.0 00:04:38.926 CC lib/ublk/ublk.o 00:04:38.926 CC lib/ublk/ublk_rpc.o 00:04:38.926 SYMLINK libspdk_blobfs.so 00:04:38.926 CC lib/scsi/dev.o 00:04:38.926 CC lib/scsi/lun.o 00:04:38.926 CC lib/scsi/port.o 00:04:38.926 CC lib/ftl/ftl_core.o 00:04:38.926 CC lib/scsi/scsi.o 00:04:38.926 CC lib/nvmf/ctrlr.o 00:04:38.926 CC lib/nbd/nbd.o 00:04:38.926 LIB libspdk_lvol.a 00:04:38.926 SO libspdk_lvol.so.10.0 00:04:38.926 CC lib/scsi/scsi_bdev.o 00:04:38.926 SYMLINK libspdk_lvol.so 00:04:38.926 CC lib/ftl/ftl_init.o 00:04:38.926 CC lib/nbd/nbd_rpc.o 00:04:38.926 CC lib/ftl/ftl_layout.o 00:04:39.184 CC lib/scsi/scsi_pr.o 00:04:39.184 CC lib/scsi/scsi_rpc.o 00:04:39.184 CC lib/scsi/task.o 00:04:39.184 CC lib/ftl/ftl_debug.o 00:04:39.184 CC lib/nvmf/ctrlr_discovery.o 00:04:39.184 CC lib/nvmf/ctrlr_bdev.o 00:04:39.184 LIB libspdk_nbd.a 00:04:39.441 CC lib/ftl/ftl_io.o 00:04:39.441 SO libspdk_nbd.so.7.0 00:04:39.441 CC lib/nvmf/subsystem.o 00:04:39.441 CC lib/nvmf/nvmf.o 00:04:39.441 SYMLINK libspdk_nbd.so 00:04:39.441 CC 
lib/ftl/ftl_sb.o 00:04:39.441 CC lib/nvmf/nvmf_rpc.o 00:04:39.441 LIB libspdk_ublk.a 00:04:39.441 SO libspdk_ublk.so.3.0 00:04:39.441 LIB libspdk_scsi.a 00:04:39.441 CC lib/ftl/ftl_l2p.o 00:04:39.699 CC lib/nvmf/transport.o 00:04:39.699 SYMLINK libspdk_ublk.so 00:04:39.699 CC lib/nvmf/tcp.o 00:04:39.699 SO libspdk_scsi.so.9.0 00:04:39.699 SYMLINK libspdk_scsi.so 00:04:39.699 CC lib/nvmf/stubs.o 00:04:39.699 CC lib/ftl/ftl_l2p_flat.o 00:04:39.699 CC lib/ftl/ftl_nv_cache.o 00:04:39.957 CC lib/ftl/ftl_band.o 00:04:39.957 CC lib/ftl/ftl_band_ops.o 00:04:40.214 CC lib/nvmf/mdns_server.o 00:04:40.214 CC lib/nvmf/rdma.o 00:04:40.472 CC lib/nvmf/auth.o 00:04:40.472 CC lib/ftl/ftl_writer.o 00:04:40.472 CC lib/ftl/ftl_rq.o 00:04:40.472 CC lib/ftl/ftl_reloc.o 00:04:40.730 CC lib/iscsi/conn.o 00:04:40.730 CC lib/iscsi/init_grp.o 00:04:40.730 CC lib/iscsi/iscsi.o 00:04:40.730 CC lib/iscsi/param.o 00:04:40.730 CC lib/iscsi/portal_grp.o 00:04:40.730 CC lib/iscsi/tgt_node.o 00:04:40.987 CC lib/iscsi/iscsi_subsystem.o 00:04:40.987 CC lib/ftl/ftl_l2p_cache.o 00:04:40.987 CC lib/iscsi/iscsi_rpc.o 00:04:41.246 CC lib/iscsi/task.o 00:04:41.246 CC lib/ftl/ftl_p2l.o 00:04:41.246 CC lib/ftl/ftl_p2l_log.o 00:04:41.246 CC lib/ftl/mngt/ftl_mngt.o 00:04:41.246 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:41.246 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:41.505 CC lib/vhost/vhost.o 00:04:41.505 CC lib/vhost/vhost_rpc.o 00:04:41.505 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:41.505 CC lib/vhost/vhost_scsi.o 00:04:41.505 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:41.763 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:41.763 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:41.763 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:41.763 CC lib/vhost/vhost_blk.o 00:04:41.763 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:41.763 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:42.021 CC lib/vhost/rte_vhost_user.o 00:04:42.021 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:42.021 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:42.021 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:42.021 
CC lib/ftl/utils/ftl_conf.o 00:04:42.021 LIB libspdk_iscsi.a 00:04:42.021 CC lib/ftl/utils/ftl_md.o 00:04:42.280 CC lib/ftl/utils/ftl_mempool.o 00:04:42.280 SO libspdk_iscsi.so.8.0 00:04:42.280 CC lib/ftl/utils/ftl_bitmap.o 00:04:42.280 CC lib/ftl/utils/ftl_property.o 00:04:42.280 SYMLINK libspdk_iscsi.so 00:04:42.280 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:42.280 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:42.566 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:42.566 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:42.566 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:42.566 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:42.566 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:42.566 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:42.566 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:42.566 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:42.566 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:42.566 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:42.566 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:42.824 LIB libspdk_nvmf.a 00:04:42.824 CC lib/ftl/base/ftl_base_dev.o 00:04:42.824 CC lib/ftl/base/ftl_base_bdev.o 00:04:42.824 CC lib/ftl/ftl_trace.o 00:04:42.824 SO libspdk_nvmf.so.20.0 00:04:42.824 LIB libspdk_vhost.a 00:04:42.824 SO libspdk_vhost.so.8.0 00:04:43.082 LIB libspdk_ftl.a 00:04:43.082 SYMLINK libspdk_vhost.so 00:04:43.082 SYMLINK libspdk_nvmf.so 00:04:43.340 SO libspdk_ftl.so.9.0 00:04:43.598 SYMLINK libspdk_ftl.so 00:04:43.855 CC module/env_dpdk/env_dpdk_rpc.o 00:04:44.113 CC module/sock/posix/posix.o 00:04:44.113 CC module/scheduler/gscheduler/gscheduler.o 00:04:44.113 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:44.113 CC module/fsdev/aio/fsdev_aio.o 00:04:44.113 CC module/keyring/linux/keyring.o 00:04:44.113 CC module/blob/bdev/blob_bdev.o 00:04:44.113 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:44.113 CC module/keyring/file/keyring.o 00:04:44.113 CC module/accel/error/accel_error.o 00:04:44.113 LIB libspdk_env_dpdk_rpc.a 00:04:44.113 SO libspdk_env_dpdk_rpc.so.6.0 00:04:44.113 CC 
module/keyring/linux/keyring_rpc.o 00:04:44.113 LIB libspdk_scheduler_gscheduler.a 00:04:44.113 LIB libspdk_scheduler_dpdk_governor.a 00:04:44.113 SYMLINK libspdk_env_dpdk_rpc.so 00:04:44.113 CC module/keyring/file/keyring_rpc.o 00:04:44.113 CC module/accel/error/accel_error_rpc.o 00:04:44.113 SO libspdk_scheduler_gscheduler.so.4.0 00:04:44.113 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:44.113 LIB libspdk_scheduler_dynamic.a 00:04:44.113 SO libspdk_scheduler_dynamic.so.4.0 00:04:44.113 SYMLINK libspdk_scheduler_gscheduler.so 00:04:44.370 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:44.370 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:44.370 CC module/fsdev/aio/linux_aio_mgr.o 00:04:44.370 SYMLINK libspdk_scheduler_dynamic.so 00:04:44.370 LIB libspdk_keyring_linux.a 00:04:44.370 LIB libspdk_keyring_file.a 00:04:44.371 LIB libspdk_blob_bdev.a 00:04:44.371 LIB libspdk_accel_error.a 00:04:44.371 SO libspdk_keyring_linux.so.1.0 00:04:44.371 SO libspdk_blob_bdev.so.11.0 00:04:44.371 SO libspdk_keyring_file.so.2.0 00:04:44.371 SO libspdk_accel_error.so.2.0 00:04:44.371 SYMLINK libspdk_keyring_linux.so 00:04:44.371 SYMLINK libspdk_blob_bdev.so 00:04:44.371 SYMLINK libspdk_keyring_file.so 00:04:44.371 SYMLINK libspdk_accel_error.so 00:04:44.371 CC module/accel/ioat/accel_ioat.o 00:04:44.371 CC module/accel/ioat/accel_ioat_rpc.o 00:04:44.371 CC module/accel/dsa/accel_dsa.o 00:04:44.371 CC module/accel/dsa/accel_dsa_rpc.o 00:04:44.628 CC module/accel/iaa/accel_iaa.o 00:04:44.628 CC module/accel/iaa/accel_iaa_rpc.o 00:04:44.628 LIB libspdk_accel_ioat.a 00:04:44.628 CC module/bdev/delay/vbdev_delay.o 00:04:44.628 CC module/bdev/error/vbdev_error.o 00:04:44.628 CC module/blobfs/bdev/blobfs_bdev.o 00:04:44.628 SO libspdk_accel_ioat.so.6.0 00:04:44.628 SYMLINK libspdk_accel_ioat.so 00:04:44.886 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:44.886 CC module/bdev/gpt/gpt.o 00:04:44.886 LIB libspdk_fsdev_aio.a 00:04:44.886 CC module/bdev/gpt/vbdev_gpt.o 00:04:44.886 LIB 
libspdk_accel_dsa.a 00:04:44.886 SO libspdk_fsdev_aio.so.1.0 00:04:44.886 LIB libspdk_accel_iaa.a 00:04:44.886 SO libspdk_accel_dsa.so.5.0 00:04:44.886 LIB libspdk_sock_posix.a 00:04:44.886 SO libspdk_accel_iaa.so.3.0 00:04:44.886 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:44.886 SO libspdk_sock_posix.so.6.0 00:04:44.886 SYMLINK libspdk_fsdev_aio.so 00:04:44.886 CC module/bdev/error/vbdev_error_rpc.o 00:04:44.886 SYMLINK libspdk_accel_dsa.so 00:04:44.886 SYMLINK libspdk_accel_iaa.so 00:04:44.886 SYMLINK libspdk_sock_posix.so 00:04:45.144 LIB libspdk_blobfs_bdev.a 00:04:45.144 LIB libspdk_bdev_delay.a 00:04:45.144 LIB libspdk_bdev_error.a 00:04:45.144 LIB libspdk_bdev_gpt.a 00:04:45.144 CC module/bdev/lvol/vbdev_lvol.o 00:04:45.144 CC module/bdev/malloc/bdev_malloc.o 00:04:45.144 CC module/bdev/null/bdev_null.o 00:04:45.144 SO libspdk_blobfs_bdev.so.6.0 00:04:45.144 SO libspdk_bdev_error.so.6.0 00:04:45.144 SO libspdk_bdev_delay.so.6.0 00:04:45.144 SO libspdk_bdev_gpt.so.6.0 00:04:45.144 CC module/bdev/nvme/bdev_nvme.o 00:04:45.144 CC module/bdev/passthru/vbdev_passthru.o 00:04:45.144 CC module/bdev/raid/bdev_raid.o 00:04:45.144 SYMLINK libspdk_blobfs_bdev.so 00:04:45.144 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:45.144 SYMLINK libspdk_bdev_error.so 00:04:45.144 SYMLINK libspdk_bdev_delay.so 00:04:45.144 SYMLINK libspdk_bdev_gpt.so 00:04:45.144 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:45.144 CC module/bdev/nvme/nvme_rpc.o 00:04:45.402 CC module/bdev/split/vbdev_split.o 00:04:45.402 CC module/bdev/nvme/bdev_mdns_client.o 00:04:45.402 CC module/bdev/null/bdev_null_rpc.o 00:04:45.402 CC module/bdev/nvme/vbdev_opal.o 00:04:45.402 LIB libspdk_bdev_passthru.a 00:04:45.402 SO libspdk_bdev_passthru.so.6.0 00:04:45.402 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:45.402 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:45.402 CC module/bdev/split/vbdev_split_rpc.o 00:04:45.402 LIB libspdk_bdev_null.a 00:04:45.402 SYMLINK libspdk_bdev_passthru.so 00:04:45.402 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:04:45.402 SO libspdk_bdev_null.so.6.0 00:04:45.661 SYMLINK libspdk_bdev_null.so 00:04:45.661 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:45.661 LIB libspdk_bdev_malloc.a 00:04:45.661 CC module/bdev/raid/bdev_raid_rpc.o 00:04:45.661 LIB libspdk_bdev_split.a 00:04:45.661 CC module/bdev/raid/bdev_raid_sb.o 00:04:45.661 SO libspdk_bdev_malloc.so.6.0 00:04:45.661 SO libspdk_bdev_split.so.6.0 00:04:45.661 SYMLINK libspdk_bdev_malloc.so 00:04:45.661 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:45.661 CC module/bdev/raid/raid0.o 00:04:45.661 SYMLINK libspdk_bdev_split.so 00:04:45.661 CC module/bdev/raid/raid1.o 00:04:45.919 CC module/bdev/raid/concat.o 00:04:45.919 LIB libspdk_bdev_lvol.a 00:04:45.919 CC module/bdev/aio/bdev_aio.o 00:04:45.919 CC module/bdev/aio/bdev_aio_rpc.o 00:04:45.919 CC module/bdev/ftl/bdev_ftl.o 00:04:45.919 SO libspdk_bdev_lvol.so.6.0 00:04:45.919 CC module/bdev/raid/raid5f.o 00:04:45.919 SYMLINK libspdk_bdev_lvol.so 00:04:45.919 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:45.919 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:46.180 CC module/bdev/iscsi/bdev_iscsi.o 00:04:46.180 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:46.180 LIB libspdk_bdev_zone_block.a 00:04:46.180 LIB libspdk_bdev_ftl.a 00:04:46.180 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:46.180 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:46.180 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:46.180 SO libspdk_bdev_zone_block.so.6.0 00:04:46.180 SO libspdk_bdev_ftl.so.6.0 00:04:46.180 LIB libspdk_bdev_aio.a 00:04:46.180 SO libspdk_bdev_aio.so.6.0 00:04:46.180 SYMLINK libspdk_bdev_zone_block.so 00:04:46.180 SYMLINK libspdk_bdev_ftl.so 00:04:46.448 SYMLINK libspdk_bdev_aio.so 00:04:46.448 LIB libspdk_bdev_raid.a 00:04:46.448 LIB libspdk_bdev_iscsi.a 00:04:46.448 SO libspdk_bdev_iscsi.so.6.0 00:04:46.448 SO libspdk_bdev_raid.so.6.0 00:04:46.722 SYMLINK libspdk_bdev_iscsi.so 00:04:46.722 SYMLINK libspdk_bdev_raid.so 00:04:46.722 LIB 
libspdk_bdev_virtio.a 00:04:46.722 SO libspdk_bdev_virtio.so.6.0 00:04:46.982 SYMLINK libspdk_bdev_virtio.so 00:04:47.920 LIB libspdk_bdev_nvme.a 00:04:47.920 SO libspdk_bdev_nvme.so.7.1 00:04:47.920 SYMLINK libspdk_bdev_nvme.so 00:04:48.490 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:48.490 CC module/event/subsystems/vmd/vmd.o 00:04:48.490 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:48.490 CC module/event/subsystems/fsdev/fsdev.o 00:04:48.490 CC module/event/subsystems/iobuf/iobuf.o 00:04:48.490 CC module/event/subsystems/sock/sock.o 00:04:48.490 CC module/event/subsystems/keyring/keyring.o 00:04:48.490 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:48.490 CC module/event/subsystems/scheduler/scheduler.o 00:04:48.749 LIB libspdk_event_vhost_blk.a 00:04:48.749 LIB libspdk_event_scheduler.a 00:04:48.749 LIB libspdk_event_fsdev.a 00:04:48.749 LIB libspdk_event_vmd.a 00:04:48.749 LIB libspdk_event_sock.a 00:04:48.749 LIB libspdk_event_iobuf.a 00:04:48.749 LIB libspdk_event_keyring.a 00:04:48.749 SO libspdk_event_vhost_blk.so.3.0 00:04:48.749 SO libspdk_event_scheduler.so.4.0 00:04:48.749 SO libspdk_event_fsdev.so.1.0 00:04:48.749 SO libspdk_event_sock.so.5.0 00:04:48.749 SO libspdk_event_keyring.so.1.0 00:04:48.749 SO libspdk_event_vmd.so.6.0 00:04:48.749 SO libspdk_event_iobuf.so.3.0 00:04:48.749 SYMLINK libspdk_event_vhost_blk.so 00:04:48.749 SYMLINK libspdk_event_scheduler.so 00:04:48.749 SYMLINK libspdk_event_fsdev.so 00:04:48.749 SYMLINK libspdk_event_keyring.so 00:04:48.749 SYMLINK libspdk_event_sock.so 00:04:48.749 SYMLINK libspdk_event_iobuf.so 00:04:48.749 SYMLINK libspdk_event_vmd.so 00:04:49.009 CC module/event/subsystems/accel/accel.o 00:04:49.269 LIB libspdk_event_accel.a 00:04:49.269 SO libspdk_event_accel.so.6.0 00:04:49.529 SYMLINK libspdk_event_accel.so 00:04:49.788 CC module/event/subsystems/bdev/bdev.o 00:04:50.049 LIB libspdk_event_bdev.a 00:04:50.049 SO libspdk_event_bdev.so.6.0 00:04:50.049 SYMLINK libspdk_event_bdev.so 
00:04:50.618 CC module/event/subsystems/nbd/nbd.o 00:04:50.618 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:50.618 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:50.618 CC module/event/subsystems/ublk/ublk.o 00:04:50.618 CC module/event/subsystems/scsi/scsi.o 00:04:50.618 LIB libspdk_event_ublk.a 00:04:50.618 LIB libspdk_event_nbd.a 00:04:50.618 SO libspdk_event_ublk.so.3.0 00:04:50.618 LIB libspdk_event_scsi.a 00:04:50.618 SO libspdk_event_nbd.so.6.0 00:04:50.618 SO libspdk_event_scsi.so.6.0 00:04:50.618 SYMLINK libspdk_event_ublk.so 00:04:50.618 LIB libspdk_event_nvmf.a 00:04:50.618 SYMLINK libspdk_event_nbd.so 00:04:50.618 SO libspdk_event_nvmf.so.6.0 00:04:50.618 SYMLINK libspdk_event_scsi.so 00:04:50.876 SYMLINK libspdk_event_nvmf.so 00:04:51.135 CC module/event/subsystems/iscsi/iscsi.o 00:04:51.135 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:51.135 LIB libspdk_event_vhost_scsi.a 00:04:51.135 LIB libspdk_event_iscsi.a 00:04:51.394 SO libspdk_event_vhost_scsi.so.3.0 00:04:51.394 SO libspdk_event_iscsi.so.6.0 00:04:51.394 SYMLINK libspdk_event_vhost_scsi.so 00:04:51.394 SYMLINK libspdk_event_iscsi.so 00:04:51.653 SO libspdk.so.6.0 00:04:51.653 SYMLINK libspdk.so 00:04:51.913 CXX app/trace/trace.o 00:04:51.913 CC test/rpc_client/rpc_client_test.o 00:04:51.913 TEST_HEADER include/spdk/accel.h 00:04:51.913 TEST_HEADER include/spdk/accel_module.h 00:04:51.913 TEST_HEADER include/spdk/assert.h 00:04:51.913 TEST_HEADER include/spdk/barrier.h 00:04:51.913 TEST_HEADER include/spdk/base64.h 00:04:51.913 TEST_HEADER include/spdk/bdev.h 00:04:51.913 TEST_HEADER include/spdk/bdev_module.h 00:04:51.913 TEST_HEADER include/spdk/bdev_zone.h 00:04:51.913 TEST_HEADER include/spdk/bit_array.h 00:04:51.913 TEST_HEADER include/spdk/bit_pool.h 00:04:51.913 TEST_HEADER include/spdk/blob_bdev.h 00:04:51.913 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:51.913 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:51.913 TEST_HEADER include/spdk/blobfs.h 00:04:51.913 
TEST_HEADER include/spdk/blob.h 00:04:51.913 TEST_HEADER include/spdk/conf.h 00:04:51.913 TEST_HEADER include/spdk/config.h 00:04:51.913 TEST_HEADER include/spdk/cpuset.h 00:04:51.913 TEST_HEADER include/spdk/crc16.h 00:04:51.913 TEST_HEADER include/spdk/crc32.h 00:04:51.913 TEST_HEADER include/spdk/crc64.h 00:04:51.913 TEST_HEADER include/spdk/dif.h 00:04:51.913 TEST_HEADER include/spdk/dma.h 00:04:51.913 TEST_HEADER include/spdk/endian.h 00:04:51.913 TEST_HEADER include/spdk/env_dpdk.h 00:04:51.913 TEST_HEADER include/spdk/env.h 00:04:51.913 TEST_HEADER include/spdk/event.h 00:04:51.913 TEST_HEADER include/spdk/fd_group.h 00:04:51.913 TEST_HEADER include/spdk/fd.h 00:04:51.913 TEST_HEADER include/spdk/file.h 00:04:51.913 TEST_HEADER include/spdk/fsdev.h 00:04:51.913 TEST_HEADER include/spdk/fsdev_module.h 00:04:51.913 TEST_HEADER include/spdk/ftl.h 00:04:51.913 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:51.913 TEST_HEADER include/spdk/gpt_spec.h 00:04:51.913 TEST_HEADER include/spdk/hexlify.h 00:04:51.913 CC test/thread/poller_perf/poller_perf.o 00:04:51.913 TEST_HEADER include/spdk/histogram_data.h 00:04:51.913 TEST_HEADER include/spdk/idxd.h 00:04:51.913 TEST_HEADER include/spdk/idxd_spec.h 00:04:51.913 TEST_HEADER include/spdk/init.h 00:04:51.913 CC examples/ioat/perf/perf.o 00:04:51.913 TEST_HEADER include/spdk/ioat.h 00:04:51.913 CC examples/util/zipf/zipf.o 00:04:51.913 TEST_HEADER include/spdk/ioat_spec.h 00:04:51.913 TEST_HEADER include/spdk/iscsi_spec.h 00:04:51.913 TEST_HEADER include/spdk/json.h 00:04:51.913 TEST_HEADER include/spdk/jsonrpc.h 00:04:51.913 TEST_HEADER include/spdk/keyring.h 00:04:51.913 TEST_HEADER include/spdk/keyring_module.h 00:04:51.913 TEST_HEADER include/spdk/likely.h 00:04:51.913 TEST_HEADER include/spdk/log.h 00:04:51.913 TEST_HEADER include/spdk/lvol.h 00:04:51.913 TEST_HEADER include/spdk/md5.h 00:04:51.913 TEST_HEADER include/spdk/memory.h 00:04:51.913 TEST_HEADER include/spdk/mmio.h 00:04:51.913 TEST_HEADER 
include/spdk/nbd.h 00:04:52.172 TEST_HEADER include/spdk/net.h 00:04:52.172 TEST_HEADER include/spdk/notify.h 00:04:52.172 TEST_HEADER include/spdk/nvme.h 00:04:52.172 CC test/dma/test_dma/test_dma.o 00:04:52.172 TEST_HEADER include/spdk/nvme_intel.h 00:04:52.172 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:52.172 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:52.172 TEST_HEADER include/spdk/nvme_spec.h 00:04:52.172 TEST_HEADER include/spdk/nvme_zns.h 00:04:52.172 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:52.172 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:52.172 CC test/app/bdev_svc/bdev_svc.o 00:04:52.172 TEST_HEADER include/spdk/nvmf.h 00:04:52.172 TEST_HEADER include/spdk/nvmf_spec.h 00:04:52.172 TEST_HEADER include/spdk/nvmf_transport.h 00:04:52.172 TEST_HEADER include/spdk/opal.h 00:04:52.172 TEST_HEADER include/spdk/opal_spec.h 00:04:52.172 TEST_HEADER include/spdk/pci_ids.h 00:04:52.172 TEST_HEADER include/spdk/pipe.h 00:04:52.172 TEST_HEADER include/spdk/queue.h 00:04:52.172 TEST_HEADER include/spdk/reduce.h 00:04:52.172 TEST_HEADER include/spdk/rpc.h 00:04:52.172 TEST_HEADER include/spdk/scheduler.h 00:04:52.172 CC test/env/mem_callbacks/mem_callbacks.o 00:04:52.172 TEST_HEADER include/spdk/scsi.h 00:04:52.172 TEST_HEADER include/spdk/scsi_spec.h 00:04:52.172 TEST_HEADER include/spdk/sock.h 00:04:52.172 TEST_HEADER include/spdk/stdinc.h 00:04:52.172 TEST_HEADER include/spdk/string.h 00:04:52.172 TEST_HEADER include/spdk/thread.h 00:04:52.172 TEST_HEADER include/spdk/trace.h 00:04:52.172 TEST_HEADER include/spdk/trace_parser.h 00:04:52.172 TEST_HEADER include/spdk/tree.h 00:04:52.172 TEST_HEADER include/spdk/ublk.h 00:04:52.173 TEST_HEADER include/spdk/util.h 00:04:52.173 TEST_HEADER include/spdk/uuid.h 00:04:52.173 TEST_HEADER include/spdk/version.h 00:04:52.173 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:52.173 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:52.173 TEST_HEADER include/spdk/vhost.h 00:04:52.173 TEST_HEADER include/spdk/vmd.h 
00:04:52.173 TEST_HEADER include/spdk/xor.h 00:04:52.173 TEST_HEADER include/spdk/zipf.h 00:04:52.173 CXX test/cpp_headers/accel.o 00:04:52.173 LINK rpc_client_test 00:04:52.173 LINK poller_perf 00:04:52.173 LINK interrupt_tgt 00:04:52.173 LINK zipf 00:04:52.173 LINK bdev_svc 00:04:52.173 LINK ioat_perf 00:04:52.173 CXX test/cpp_headers/accel_module.o 00:04:52.173 LINK mem_callbacks 00:04:52.432 LINK spdk_trace 00:04:52.432 CC test/env/vtophys/vtophys.o 00:04:52.432 CXX test/cpp_headers/assert.o 00:04:52.432 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:52.432 CC examples/ioat/verify/verify.o 00:04:52.432 CC app/trace_record/trace_record.o 00:04:52.432 CC test/env/memory/memory_ut.o 00:04:52.432 LINK test_dma 00:04:52.432 LINK vtophys 00:04:52.432 CXX test/cpp_headers/barrier.o 00:04:52.690 LINK env_dpdk_post_init 00:04:52.690 CC examples/thread/thread/thread_ex.o 00:04:52.690 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:52.690 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:52.690 CXX test/cpp_headers/base64.o 00:04:52.690 LINK verify 00:04:52.690 CXX test/cpp_headers/bdev.o 00:04:52.690 LINK spdk_trace_record 00:04:52.949 CC test/event/event_perf/event_perf.o 00:04:52.949 LINK thread 00:04:52.949 CXX test/cpp_headers/bdev_module.o 00:04:52.949 CC test/env/pci/pci_ut.o 00:04:52.949 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:52.949 CC test/accel/dif/dif.o 00:04:52.949 LINK event_perf 00:04:52.949 CC app/nvmf_tgt/nvmf_main.o 00:04:53.209 LINK nvme_fuzz 00:04:53.209 CXX test/cpp_headers/bdev_zone.o 00:04:53.209 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:53.209 LINK nvmf_tgt 00:04:53.209 CXX test/cpp_headers/bit_array.o 00:04:53.209 CC test/event/reactor/reactor.o 00:04:53.467 LINK pci_ut 00:04:53.467 CC examples/sock/hello_world/hello_sock.o 00:04:53.467 LINK memory_ut 00:04:53.467 CC app/iscsi_tgt/iscsi_tgt.o 00:04:53.467 LINK reactor 00:04:53.467 CXX test/cpp_headers/bit_pool.o 00:04:53.467 LINK vhost_fuzz 00:04:53.726 CXX 
test/cpp_headers/blob_bdev.o 00:04:53.726 LINK iscsi_tgt 00:04:53.726 LINK hello_sock 00:04:53.726 CC test/blobfs/mkfs/mkfs.o 00:04:53.726 CC test/event/reactor_perf/reactor_perf.o 00:04:53.726 LINK dif 00:04:53.726 CC test/nvme/aer/aer.o 00:04:53.726 CXX test/cpp_headers/blobfs_bdev.o 00:04:53.726 CC test/lvol/esnap/esnap.o 00:04:53.726 LINK mkfs 00:04:53.985 CC test/app/histogram_perf/histogram_perf.o 00:04:53.985 LINK reactor_perf 00:04:53.985 CXX test/cpp_headers/blobfs.o 00:04:53.985 CC app/spdk_tgt/spdk_tgt.o 00:04:53.985 CC examples/vmd/lsvmd/lsvmd.o 00:04:53.985 LINK histogram_perf 00:04:53.985 LINK aer 00:04:53.985 CXX test/cpp_headers/blob.o 00:04:53.985 CC test/event/app_repeat/app_repeat.o 00:04:54.244 CC app/spdk_lspci/spdk_lspci.o 00:04:54.244 CC examples/idxd/perf/perf.o 00:04:54.244 LINK lsvmd 00:04:54.244 LINK spdk_tgt 00:04:54.244 LINK spdk_lspci 00:04:54.244 LINK app_repeat 00:04:54.244 CC test/app/jsoncat/jsoncat.o 00:04:54.244 CXX test/cpp_headers/conf.o 00:04:54.244 CC test/nvme/reset/reset.o 00:04:54.502 CC examples/vmd/led/led.o 00:04:54.502 LINK jsoncat 00:04:54.502 LINK iscsi_fuzz 00:04:54.502 CXX test/cpp_headers/config.o 00:04:54.502 LINK idxd_perf 00:04:54.502 CC test/nvme/sgl/sgl.o 00:04:54.502 CXX test/cpp_headers/cpuset.o 00:04:54.502 LINK led 00:04:54.502 CC app/spdk_nvme_perf/perf.o 00:04:54.502 CC test/event/scheduler/scheduler.o 00:04:54.502 LINK reset 00:04:54.768 CC test/nvme/e2edp/nvme_dp.o 00:04:54.768 CXX test/cpp_headers/crc16.o 00:04:54.768 CC test/nvme/overhead/overhead.o 00:04:54.768 LINK sgl 00:04:54.768 CC test/app/stub/stub.o 00:04:54.768 CXX test/cpp_headers/crc32.o 00:04:54.768 LINK scheduler 00:04:54.768 CC test/nvme/err_injection/err_injection.o 00:04:55.036 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:55.036 LINK nvme_dp 00:04:55.036 CXX test/cpp_headers/crc64.o 00:04:55.036 LINK stub 00:04:55.036 LINK err_injection 00:04:55.036 LINK overhead 00:04:55.036 CC test/nvme/startup/startup.o 00:04:55.036 CXX 
test/cpp_headers/dif.o 00:04:55.036 CC test/nvme/reserve/reserve.o 00:04:55.293 CC test/bdev/bdevio/bdevio.o 00:04:55.293 LINK hello_fsdev 00:04:55.293 CC app/spdk_nvme_identify/identify.o 00:04:55.293 LINK startup 00:04:55.293 CXX test/cpp_headers/dma.o 00:04:55.293 CC test/nvme/simple_copy/simple_copy.o 00:04:55.293 CC test/nvme/connect_stress/connect_stress.o 00:04:55.293 LINK reserve 00:04:55.551 CXX test/cpp_headers/endian.o 00:04:55.551 LINK spdk_nvme_perf 00:04:55.551 LINK connect_stress 00:04:55.551 CC test/nvme/boot_partition/boot_partition.o 00:04:55.551 LINK simple_copy 00:04:55.551 LINK bdevio 00:04:55.551 CC examples/accel/perf/accel_perf.o 00:04:55.551 CXX test/cpp_headers/env_dpdk.o 00:04:55.551 CC test/nvme/compliance/nvme_compliance.o 00:04:55.810 LINK boot_partition 00:04:55.810 CXX test/cpp_headers/env.o 00:04:55.810 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:55.810 CC test/nvme/fused_ordering/fused_ordering.o 00:04:55.810 CC test/nvme/cuse/cuse.o 00:04:55.810 CC test/nvme/fdp/fdp.o 00:04:55.810 CXX test/cpp_headers/event.o 00:04:56.069 LINK doorbell_aers 00:04:56.069 LINK nvme_compliance 00:04:56.069 LINK fused_ordering 00:04:56.069 CXX test/cpp_headers/fd_group.o 00:04:56.069 CC examples/blob/hello_world/hello_blob.o 00:04:56.069 CXX test/cpp_headers/fd.o 00:04:56.069 CXX test/cpp_headers/file.o 00:04:56.069 LINK spdk_nvme_identify 00:04:56.069 LINK accel_perf 00:04:56.069 CXX test/cpp_headers/fsdev.o 00:04:56.328 LINK fdp 00:04:56.328 CXX test/cpp_headers/fsdev_module.o 00:04:56.328 LINK hello_blob 00:04:56.328 CC examples/nvme/hello_world/hello_world.o 00:04:56.328 CC examples/nvme/reconnect/reconnect.o 00:04:56.328 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:56.328 CC app/spdk_nvme_discover/discovery_aer.o 00:04:56.587 CXX test/cpp_headers/ftl.o 00:04:56.587 CC examples/blob/cli/blobcli.o 00:04:56.587 CC examples/nvme/arbitration/arbitration.o 00:04:56.587 LINK hello_world 00:04:56.587 LINK spdk_nvme_discover 00:04:56.587 CXX 
test/cpp_headers/fuse_dispatcher.o 00:04:56.587 CC examples/nvme/hotplug/hotplug.o 00:04:56.587 LINK reconnect 00:04:56.846 CXX test/cpp_headers/gpt_spec.o 00:04:56.846 LINK arbitration 00:04:56.846 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:56.846 CC app/spdk_top/spdk_top.o 00:04:56.846 LINK hotplug 00:04:56.846 CC examples/nvme/abort/abort.o 00:04:56.846 CXX test/cpp_headers/hexlify.o 00:04:56.846 LINK nvme_manage 00:04:57.104 LINK blobcli 00:04:57.104 LINK cmb_copy 00:04:57.104 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:57.104 LINK cuse 00:04:57.104 CXX test/cpp_headers/histogram_data.o 00:04:57.104 CXX test/cpp_headers/idxd.o 00:04:57.104 LINK pmr_persistence 00:04:57.363 CXX test/cpp_headers/idxd_spec.o 00:04:57.363 CC examples/bdev/hello_world/hello_bdev.o 00:04:57.363 CC examples/bdev/bdevperf/bdevperf.o 00:04:57.363 LINK abort 00:04:57.363 CXX test/cpp_headers/init.o 00:04:57.363 CC app/vhost/vhost.o 00:04:57.363 CC app/spdk_dd/spdk_dd.o 00:04:57.363 CC app/fio/nvme/fio_plugin.o 00:04:57.622 LINK hello_bdev 00:04:57.622 CXX test/cpp_headers/ioat.o 00:04:57.622 CC app/fio/bdev/fio_plugin.o 00:04:57.622 CXX test/cpp_headers/ioat_spec.o 00:04:57.622 LINK vhost 00:04:57.622 CXX test/cpp_headers/iscsi_spec.o 00:04:57.622 CXX test/cpp_headers/json.o 00:04:57.622 CXX test/cpp_headers/jsonrpc.o 00:04:57.880 LINK spdk_dd 00:04:57.880 CXX test/cpp_headers/keyring.o 00:04:57.880 LINK spdk_top 00:04:57.880 CXX test/cpp_headers/keyring_module.o 00:04:57.880 CXX test/cpp_headers/likely.o 00:04:57.880 CXX test/cpp_headers/log.o 00:04:57.880 CXX test/cpp_headers/lvol.o 00:04:57.880 CXX test/cpp_headers/md5.o 00:04:57.880 CXX test/cpp_headers/memory.o 00:04:57.880 CXX test/cpp_headers/mmio.o 00:04:58.139 CXX test/cpp_headers/nbd.o 00:04:58.139 CXX test/cpp_headers/net.o 00:04:58.139 LINK spdk_nvme 00:04:58.139 CXX test/cpp_headers/notify.o 00:04:58.140 LINK spdk_bdev 00:04:58.140 CXX test/cpp_headers/nvme.o 00:04:58.140 CXX test/cpp_headers/nvme_ocssd.o 
00:04:58.140 CXX test/cpp_headers/nvme_intel.o 00:04:58.140 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:58.140 CXX test/cpp_headers/nvme_spec.o 00:04:58.140 CXX test/cpp_headers/nvmf_cmd.o 00:04:58.140 CXX test/cpp_headers/nvme_zns.o 00:04:58.140 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:58.140 LINK bdevperf 00:04:58.400 CXX test/cpp_headers/nvmf.o 00:04:58.400 CXX test/cpp_headers/nvmf_spec.o 00:04:58.400 CXX test/cpp_headers/nvmf_transport.o 00:04:58.400 CXX test/cpp_headers/opal.o 00:04:58.400 CXX test/cpp_headers/opal_spec.o 00:04:58.400 CXX test/cpp_headers/pci_ids.o 00:04:58.400 CXX test/cpp_headers/pipe.o 00:04:58.400 CXX test/cpp_headers/queue.o 00:04:58.400 CXX test/cpp_headers/reduce.o 00:04:58.400 CXX test/cpp_headers/rpc.o 00:04:58.400 CXX test/cpp_headers/scheduler.o 00:04:58.400 CXX test/cpp_headers/scsi.o 00:04:58.400 CXX test/cpp_headers/scsi_spec.o 00:04:58.400 CXX test/cpp_headers/sock.o 00:04:58.400 CXX test/cpp_headers/stdinc.o 00:04:58.400 CXX test/cpp_headers/string.o 00:04:58.660 CXX test/cpp_headers/thread.o 00:04:58.660 CXX test/cpp_headers/trace.o 00:04:58.660 CXX test/cpp_headers/trace_parser.o 00:04:58.660 CC examples/nvmf/nvmf/nvmf.o 00:04:58.660 CXX test/cpp_headers/tree.o 00:04:58.660 CXX test/cpp_headers/ublk.o 00:04:58.660 CXX test/cpp_headers/util.o 00:04:58.660 CXX test/cpp_headers/uuid.o 00:04:58.660 CXX test/cpp_headers/version.o 00:04:58.660 CXX test/cpp_headers/vfio_user_pci.o 00:04:58.660 CXX test/cpp_headers/vfio_user_spec.o 00:04:58.660 CXX test/cpp_headers/vhost.o 00:04:58.660 CXX test/cpp_headers/vmd.o 00:04:58.660 CXX test/cpp_headers/xor.o 00:04:58.660 CXX test/cpp_headers/zipf.o 00:04:58.920 LINK nvmf 00:04:59.861 LINK esnap 00:05:00.121 00:05:00.121 real 1m13.284s 00:05:00.121 user 5m45.114s 00:05:00.121 sys 1m11.623s 00:05:00.121 13:18:41 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:00.121 13:18:41 make -- common/autotest_common.sh@10 -- $ set +x 00:05:00.121 ************************************ 
00:05:00.121 END TEST make 00:05:00.121 ************************************ 00:05:00.121 13:18:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:00.121 13:18:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:00.121 13:18:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:00.121 13:18:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:00.121 13:18:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:00.121 13:18:41 -- pm/common@44 -- $ pid=6191 00:05:00.121 13:18:41 -- pm/common@50 -- $ kill -TERM 6191 00:05:00.121 13:18:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:00.121 13:18:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:00.121 13:18:41 -- pm/common@44 -- $ pid=6193 00:05:00.121 13:18:41 -- pm/common@50 -- $ kill -TERM 6193 00:05:00.121 13:18:41 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:00.121 13:18:41 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:00.121 13:18:41 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.121 13:18:41 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.121 13:18:41 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.382 13:18:41 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.382 13:18:41 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.382 13:18:41 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.382 13:18:41 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.382 13:18:41 -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.382 13:18:41 -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.382 13:18:41 -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.382 13:18:41 -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.382 13:18:41 -- scripts/common.sh@338 -- # 
local 'op=<' 00:05:00.382 13:18:41 -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.382 13:18:41 -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.382 13:18:41 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.382 13:18:41 -- scripts/common.sh@344 -- # case "$op" in 00:05:00.382 13:18:41 -- scripts/common.sh@345 -- # : 1 00:05:00.382 13:18:41 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.382 13:18:41 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.382 13:18:41 -- scripts/common.sh@365 -- # decimal 1 00:05:00.382 13:18:41 -- scripts/common.sh@353 -- # local d=1 00:05:00.382 13:18:41 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.382 13:18:41 -- scripts/common.sh@355 -- # echo 1 00:05:00.382 13:18:41 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.382 13:18:41 -- scripts/common.sh@366 -- # decimal 2 00:05:00.382 13:18:41 -- scripts/common.sh@353 -- # local d=2 00:05:00.382 13:18:41 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.382 13:18:41 -- scripts/common.sh@355 -- # echo 2 00:05:00.382 13:18:41 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.382 13:18:41 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.382 13:18:41 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.382 13:18:41 -- scripts/common.sh@368 -- # return 0 00:05:00.382 13:18:41 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.382 13:18:41 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.382 --rc genhtml_branch_coverage=1 00:05:00.382 --rc genhtml_function_coverage=1 00:05:00.382 --rc genhtml_legend=1 00:05:00.382 --rc geninfo_all_blocks=1 00:05:00.382 --rc geninfo_unexecuted_blocks=1 00:05:00.382 00:05:00.382 ' 00:05:00.382 13:18:41 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.382 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:00.382 --rc genhtml_branch_coverage=1 00:05:00.382 --rc genhtml_function_coverage=1 00:05:00.382 --rc genhtml_legend=1 00:05:00.382 --rc geninfo_all_blocks=1 00:05:00.382 --rc geninfo_unexecuted_blocks=1 00:05:00.382 00:05:00.382 ' 00:05:00.382 13:18:41 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.382 --rc genhtml_branch_coverage=1 00:05:00.382 --rc genhtml_function_coverage=1 00:05:00.382 --rc genhtml_legend=1 00:05:00.382 --rc geninfo_all_blocks=1 00:05:00.382 --rc geninfo_unexecuted_blocks=1 00:05:00.382 00:05:00.382 ' 00:05:00.382 13:18:41 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.382 --rc genhtml_branch_coverage=1 00:05:00.382 --rc genhtml_function_coverage=1 00:05:00.382 --rc genhtml_legend=1 00:05:00.382 --rc geninfo_all_blocks=1 00:05:00.382 --rc geninfo_unexecuted_blocks=1 00:05:00.382 00:05:00.382 ' 00:05:00.382 13:18:41 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:00.382 13:18:41 -- nvmf/common.sh@7 -- # uname -s 00:05:00.382 13:18:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.382 13:18:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.382 13:18:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.382 13:18:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.382 13:18:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.382 13:18:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.382 13:18:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:00.382 13:18:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.382 13:18:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.382 13:18:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:00.382 13:18:41 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ffda71aa-3258-4bae-910a-531305c80dfb 00:05:00.382 13:18:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=ffda71aa-3258-4bae-910a-531305c80dfb 00:05:00.382 13:18:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.382 13:18:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.382 13:18:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:00.382 13:18:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.382 13:18:41 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:00.382 13:18:41 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:00.382 13:18:41 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.382 13:18:41 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.382 13:18:41 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.382 13:18:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.382 13:18:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.383 13:18:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.383 13:18:41 -- paths/export.sh@5 -- # export PATH 00:05:00.383 13:18:41 -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.383 13:18:41 -- nvmf/common.sh@51 -- # : 0 00:05:00.383 13:18:41 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:00.383 13:18:41 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:00.383 13:18:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:00.383 13:18:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.383 13:18:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.383 13:18:41 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:00.383 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:00.383 13:18:41 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:00.383 13:18:41 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:00.383 13:18:41 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:00.383 13:18:41 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:00.383 13:18:41 -- spdk/autotest.sh@32 -- # uname -s 00:05:00.383 13:18:41 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:00.383 13:18:41 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:00.383 13:18:41 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:00.383 13:18:41 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:00.383 13:18:41 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:00.383 13:18:41 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:00.383 13:18:41 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:00.383 13:18:41 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:00.383 13:18:41 -- spdk/autotest.sh@48 -- # udevadm_pid=66487 
00:05:00.383 13:18:41 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:00.383 13:18:41 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:00.383 13:18:41 -- pm/common@17 -- # local monitor 00:05:00.383 13:18:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:00.383 13:18:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:00.383 13:18:41 -- pm/common@25 -- # sleep 1 00:05:00.383 13:18:41 -- pm/common@21 -- # date +%s 00:05:00.383 13:18:41 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732108721 00:05:00.383 13:18:41 -- pm/common@21 -- # date +%s 00:05:00.383 13:18:41 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732108721 00:05:00.383 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732108721_collect-cpu-load.pm.log 00:05:00.383 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732108721_collect-vmstat.pm.log 00:05:01.322 13:18:42 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:01.322 13:18:42 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:01.322 13:18:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.322 13:18:42 -- common/autotest_common.sh@10 -- # set +x 00:05:01.322 13:18:42 -- spdk/autotest.sh@59 -- # create_test_list 00:05:01.322 13:18:42 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:01.322 13:18:42 -- common/autotest_common.sh@10 -- # set +x 00:05:01.581 13:18:43 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:01.581 13:18:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:01.581 13:18:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:01.581 13:18:43 -- 
spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:01.581 13:18:43 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:01.581 13:18:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:01.581 13:18:43 -- common/autotest_common.sh@1457 -- # uname 00:05:01.581 13:18:43 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:01.581 13:18:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:01.581 13:18:43 -- common/autotest_common.sh@1477 -- # uname 00:05:01.581 13:18:43 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:01.581 13:18:43 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:01.581 13:18:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:01.581 lcov: LCOV version 1.15 00:05:01.581 13:18:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:16.477 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:16.477 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:31.370 13:19:12 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:31.370 13:19:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.370 13:19:12 -- common/autotest_common.sh@10 -- # set +x 00:05:31.370 13:19:12 -- spdk/autotest.sh@78 -- # rm -f 00:05:31.370 13:19:12 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:31.630 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:31.890 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:31.890 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:31.890 13:19:13 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:31.890 13:19:13 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:31.890 13:19:13 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:31.890 13:19:13 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:31.890 13:19:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:31.890 13:19:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:31.890 13:19:13 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:31.890 13:19:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:31.890 13:19:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:31.890 13:19:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:31.890 13:19:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:05:31.890 13:19:13 -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:05:31.890 13:19:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:31.890 13:19:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:31.890 13:19:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:31.890 13:19:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:05:31.890 13:19:13 -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:05:31.890 13:19:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:31.890 13:19:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:31.890 13:19:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:31.890 13:19:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 
00:05:31.890 13:19:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:31.890 13:19:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:31.890 13:19:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:31.890 13:19:13 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:31.890 13:19:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.890 13:19:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.890 13:19:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:31.890 13:19:13 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:31.890 13:19:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:31.890 No valid GPT data, bailing 00:05:31.890 13:19:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:31.890 13:19:13 -- scripts/common.sh@394 -- # pt= 00:05:31.890 13:19:13 -- scripts/common.sh@395 -- # return 1 00:05:31.890 13:19:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:31.890 1+0 records in 00:05:31.890 1+0 records out 00:05:31.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00712535 s, 147 MB/s 00:05:31.890 13:19:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.890 13:19:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.890 13:19:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:05:31.890 13:19:13 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:05:31.890 13:19:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:05:32.150 No valid GPT data, bailing 00:05:32.150 13:19:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:32.150 13:19:13 -- scripts/common.sh@394 -- # pt= 00:05:32.150 13:19:13 -- scripts/common.sh@395 -- # return 1 00:05:32.150 13:19:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:05:32.150 1+0 records in 
00:05:32.150 1+0 records out 00:05:32.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00330744 s, 317 MB/s 00:05:32.150 13:19:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.150 13:19:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.150 13:19:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:05:32.150 13:19:13 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:05:32.151 13:19:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:05:32.151 No valid GPT data, bailing 00:05:32.151 13:19:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:32.151 13:19:13 -- scripts/common.sh@394 -- # pt= 00:05:32.151 13:19:13 -- scripts/common.sh@395 -- # return 1 00:05:32.151 13:19:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:05:32.151 1+0 records in 00:05:32.151 1+0 records out 00:05:32.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456475 s, 230 MB/s 00:05:32.151 13:19:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.151 13:19:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.151 13:19:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:32.151 13:19:13 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:32.151 13:19:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:32.151 No valid GPT data, bailing 00:05:32.151 13:19:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:32.151 13:19:13 -- scripts/common.sh@394 -- # pt= 00:05:32.151 13:19:13 -- scripts/common.sh@395 -- # return 1 00:05:32.151 13:19:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:32.151 1+0 records in 00:05:32.151 1+0 records out 00:05:32.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00346296 s, 303 MB/s 00:05:32.151 13:19:13 -- spdk/autotest.sh@105 -- # sync 00:05:32.151 13:19:13 -- spdk/autotest.sh@107 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:05:32.151 13:19:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:32.151 13:19:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:35.444 13:19:16 -- spdk/autotest.sh@111 -- # uname -s 00:05:35.444 13:19:16 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:35.444 13:19:16 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:35.444 13:19:16 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:36.014 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.014 Hugepages 00:05:36.014 node hugesize free / total 00:05:36.014 node0 1048576kB 0 / 0 00:05:36.014 node0 2048kB 0 / 0 00:05:36.014 00:05:36.014 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:36.276 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:36.276 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:36.276 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:36.276 13:19:17 -- spdk/autotest.sh@117 -- # uname -s 00:05:36.276 13:19:17 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:36.276 13:19:17 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:36.276 13:19:17 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.220 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.220 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.479 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.480 13:19:18 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:38.420 13:19:20 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:38.420 13:19:20 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:38.420 13:19:20 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:38.420 13:19:20 -- common/autotest_common.sh@1520 -- # 
get_nvme_bdfs 00:05:38.420 13:19:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:38.420 13:19:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:38.420 13:19:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:38.420 13:19:20 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:38.420 13:19:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:38.681 13:19:20 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:38.681 13:19:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:38.681 13:19:20 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.941 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.200 Waiting for block devices as requested 00:05:39.200 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:39.200 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:39.461 13:19:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:39.461 13:19:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:39.461 13:19:20 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:39.461 13:19:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:39.461 13:19:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:39.461 13:19:20 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:39.461 13:19:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:39.461 13:19:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:39.461 13:19:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 
00:05:39.461 13:19:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:39.461 13:19:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:39.461 13:19:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:39.461 13:19:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:39.461 13:19:20 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:39.461 13:19:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:39.461 13:19:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:39.461 13:19:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:39.461 13:19:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:39.461 13:19:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:39.461 13:19:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:39.461 13:19:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:39.461 13:19:20 -- common/autotest_common.sh@1543 -- # continue 00:05:39.461 13:19:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:39.461 13:19:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:39.461 13:19:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:39.461 13:19:20 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:39.461 13:19:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:39.461 13:19:20 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:39.461 13:19:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:39.461 13:19:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:39.461 13:19:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:39.461 13:19:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:39.461 
13:19:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:39.461 13:19:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:39.461 13:19:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:39.461 13:19:20 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:39.461 13:19:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:39.461 13:19:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:39.461 13:19:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:39.461 13:19:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:39.461 13:19:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:39.461 13:19:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:39.461 13:19:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:39.461 13:19:20 -- common/autotest_common.sh@1543 -- # continue 00:05:39.461 13:19:20 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:39.461 13:19:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:39.461 13:19:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.461 13:19:21 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:39.461 13:19:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.461 13:19:21 -- common/autotest_common.sh@10 -- # set +x 00:05:39.461 13:19:21 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.401 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.401 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.401 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.401 13:19:22 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:40.401 13:19:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:40.401 13:19:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.659 13:19:22 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:40.659 13:19:22 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:40.659 13:19:22 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:40.659 13:19:22 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:40.659 13:19:22 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:40.659 13:19:22 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:40.659 13:19:22 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:40.659 13:19:22 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:40.659 13:19:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:40.659 13:19:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:40.659 13:19:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:40.659 13:19:22 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:40.659 13:19:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:40.659 13:19:22 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:40.659 13:19:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:40.659 13:19:22 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:40.659 13:19:22 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:40.659 13:19:22 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:40.660 13:19:22 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:40.660 13:19:22 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:40.660 13:19:22 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:40.660 13:19:22 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:40.660 13:19:22 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:40.660 13:19:22 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:40.660 13:19:22 -- 
common/autotest_common.sh@1572 -- # return 0 00:05:40.660 13:19:22 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:40.660 13:19:22 -- common/autotest_common.sh@1580 -- # return 0 00:05:40.660 13:19:22 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:40.660 13:19:22 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:40.660 13:19:22 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:40.660 13:19:22 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:40.660 13:19:22 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:40.660 13:19:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:40.660 13:19:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.660 13:19:22 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:40.660 13:19:22 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:40.660 13:19:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.660 13:19:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.660 13:19:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.660 ************************************ 00:05:40.660 START TEST env 00:05:40.660 ************************************ 00:05:40.660 13:19:22 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:40.919 * Looking for test storage... 
00:05:40.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:40.919 13:19:22 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.919 13:19:22 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.919 13:19:22 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.919 13:19:22 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.919 13:19:22 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.919 13:19:22 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.919 13:19:22 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.919 13:19:22 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.919 13:19:22 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.919 13:19:22 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.919 13:19:22 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.919 13:19:22 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.919 13:19:22 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.919 13:19:22 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.919 13:19:22 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.919 13:19:22 env -- scripts/common.sh@344 -- # case "$op" in 00:05:40.919 13:19:22 env -- scripts/common.sh@345 -- # : 1 00:05:40.919 13:19:22 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.919 13:19:22 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.919 13:19:22 env -- scripts/common.sh@365 -- # decimal 1 00:05:40.919 13:19:22 env -- scripts/common.sh@353 -- # local d=1 00:05:40.919 13:19:22 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.919 13:19:22 env -- scripts/common.sh@355 -- # echo 1 00:05:40.919 13:19:22 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.919 13:19:22 env -- scripts/common.sh@366 -- # decimal 2 00:05:40.919 13:19:22 env -- scripts/common.sh@353 -- # local d=2 00:05:40.919 13:19:22 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.919 13:19:22 env -- scripts/common.sh@355 -- # echo 2 00:05:40.919 13:19:22 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.919 13:19:22 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.919 13:19:22 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.919 13:19:22 env -- scripts/common.sh@368 -- # return 0 00:05:40.919 13:19:22 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.919 13:19:22 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.919 --rc genhtml_branch_coverage=1 00:05:40.919 --rc genhtml_function_coverage=1 00:05:40.919 --rc genhtml_legend=1 00:05:40.919 --rc geninfo_all_blocks=1 00:05:40.919 --rc geninfo_unexecuted_blocks=1 00:05:40.919 00:05:40.919 ' 00:05:40.919 13:19:22 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.919 --rc genhtml_branch_coverage=1 00:05:40.919 --rc genhtml_function_coverage=1 00:05:40.919 --rc genhtml_legend=1 00:05:40.919 --rc geninfo_all_blocks=1 00:05:40.919 --rc geninfo_unexecuted_blocks=1 00:05:40.919 00:05:40.919 ' 00:05:40.919 13:19:22 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:40.919 --rc genhtml_branch_coverage=1 00:05:40.919 --rc genhtml_function_coverage=1 00:05:40.919 --rc genhtml_legend=1 00:05:40.919 --rc geninfo_all_blocks=1 00:05:40.919 --rc geninfo_unexecuted_blocks=1 00:05:40.919 00:05:40.919 ' 00:05:40.919 13:19:22 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.919 --rc genhtml_branch_coverage=1 00:05:40.919 --rc genhtml_function_coverage=1 00:05:40.919 --rc genhtml_legend=1 00:05:40.919 --rc geninfo_all_blocks=1 00:05:40.919 --rc geninfo_unexecuted_blocks=1 00:05:40.919 00:05:40.919 ' 00:05:40.919 13:19:22 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:40.919 13:19:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.919 13:19:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.919 13:19:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.919 ************************************ 00:05:40.919 START TEST env_memory 00:05:40.919 ************************************ 00:05:40.919 13:19:22 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:40.919 00:05:40.919 00:05:40.919 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.919 http://cunit.sourceforge.net/ 00:05:40.919 00:05:40.919 00:05:40.919 Suite: memory 00:05:40.919 Test: alloc and free memory map ...[2024-11-20 13:19:22.549341] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:40.919 passed 00:05:41.179 Test: mem map translation ...[2024-11-20 13:19:22.590285] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:41.179 [2024-11-20 13:19:22.590327] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:41.179 [2024-11-20 13:19:22.590383] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:41.179 [2024-11-20 13:19:22.590402] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:41.179 passed 00:05:41.179 Test: mem map registration ...[2024-11-20 13:19:22.652280] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:41.179 [2024-11-20 13:19:22.652315] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:41.179 passed 00:05:41.179 Test: mem map adjacent registrations ...passed 00:05:41.179 00:05:41.179 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.179 suites 1 1 n/a 0 0 00:05:41.179 tests 4 4 4 0 0 00:05:41.179 asserts 152 152 152 0 n/a 00:05:41.179 00:05:41.179 Elapsed time = 0.225 seconds 00:05:41.179 00:05:41.179 real 0m0.278s 00:05:41.179 user 0m0.237s 00:05:41.179 sys 0m0.030s 00:05:41.179 13:19:22 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.179 13:19:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:41.179 ************************************ 00:05:41.179 END TEST env_memory 00:05:41.179 ************************************ 00:05:41.179 13:19:22 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:41.179 13:19:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.179 13:19:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.179 13:19:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.179 
************************************ 00:05:41.179 START TEST env_vtophys 00:05:41.180 ************************************ 00:05:41.180 13:19:22 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:41.440 EAL: lib.eal log level changed from notice to debug 00:05:41.440 EAL: Detected lcore 0 as core 0 on socket 0 00:05:41.440 EAL: Detected lcore 1 as core 0 on socket 0 00:05:41.440 EAL: Detected lcore 2 as core 0 on socket 0 00:05:41.440 EAL: Detected lcore 3 as core 0 on socket 0 00:05:41.440 EAL: Detected lcore 4 as core 0 on socket 0 00:05:41.440 EAL: Detected lcore 5 as core 0 on socket 0 00:05:41.440 EAL: Detected lcore 6 as core 0 on socket 0 00:05:41.441 EAL: Detected lcore 7 as core 0 on socket 0 00:05:41.441 EAL: Detected lcore 8 as core 0 on socket 0 00:05:41.441 EAL: Detected lcore 9 as core 0 on socket 0 00:05:41.441 EAL: Maximum logical cores by configuration: 128 00:05:41.441 EAL: Detected CPU lcores: 10 00:05:41.441 EAL: Detected NUMA nodes: 1 00:05:41.441 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:41.441 EAL: Detected shared linkage of DPDK 00:05:41.441 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:41.441 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:41.441 EAL: Registered [vdev] bus. 
00:05:41.441 EAL: bus.vdev log level changed from disabled to notice 00:05:41.441 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:41.441 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:41.441 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:41.441 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:41.441 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:41.441 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:41.441 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:41.441 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:41.441 EAL: No shared files mode enabled, IPC will be disabled 00:05:41.441 EAL: No shared files mode enabled, IPC is disabled 00:05:41.441 EAL: Selected IOVA mode 'PA' 00:05:41.441 EAL: Probing VFIO support... 00:05:41.441 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:41.441 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:41.441 EAL: Ask a virtual area of 0x2e000 bytes 00:05:41.441 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:41.441 EAL: Setting up physically contiguous memory... 
00:05:41.441 EAL: Setting maximum number of open files to 524288 00:05:41.441 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:41.441 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:41.441 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.441 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:41.441 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.441 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.441 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:41.441 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:41.441 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.441 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:41.441 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.441 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.441 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:41.441 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:41.441 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.441 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:41.441 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.441 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.441 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:41.441 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:41.441 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.441 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:41.441 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.441 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.441 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:41.441 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:41.441 EAL: Hugepages will be freed exactly as allocated. 
00:05:41.441 EAL: No shared files mode enabled, IPC is disabled 00:05:41.441 EAL: No shared files mode enabled, IPC is disabled 00:05:41.441 EAL: TSC frequency is ~2290000 KHz 00:05:41.441 EAL: Main lcore 0 is ready (tid=7fa9d60a7a40;cpuset=[0]) 00:05:41.441 EAL: Trying to obtain current memory policy. 00:05:41.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.441 EAL: Restoring previous memory policy: 0 00:05:41.441 EAL: request: mp_malloc_sync 00:05:41.441 EAL: No shared files mode enabled, IPC is disabled 00:05:41.441 EAL: Heap on socket 0 was expanded by 2MB 00:05:41.441 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:41.441 EAL: No shared files mode enabled, IPC is disabled 00:05:41.441 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:41.441 EAL: Mem event callback 'spdk:(nil)' registered 00:05:41.441 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:41.441 00:05:41.441 00:05:41.441 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.441 http://cunit.sourceforge.net/ 00:05:41.441 00:05:41.441 00:05:41.441 Suite: components_suite 00:05:41.703 Test: vtophys_malloc_test ...passed 00:05:41.703 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:41.703 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.703 EAL: Restoring previous memory policy: 4 00:05:41.703 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.703 EAL: request: mp_malloc_sync 00:05:41.703 EAL: No shared files mode enabled, IPC is disabled 00:05:41.703 EAL: Heap on socket 0 was expanded by 4MB 00:05:41.703 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.703 EAL: request: mp_malloc_sync 00:05:41.703 EAL: No shared files mode enabled, IPC is disabled 00:05:41.703 EAL: Heap on socket 0 was shrunk by 4MB 00:05:41.703 EAL: Trying to obtain current memory policy. 
00:05:41.703 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.703 EAL: Restoring previous memory policy: 4 00:05:41.703 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.703 EAL: request: mp_malloc_sync 00:05:41.703 EAL: No shared files mode enabled, IPC is disabled 00:05:41.703 EAL: Heap on socket 0 was expanded by 6MB 00:05:41.703 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.703 EAL: request: mp_malloc_sync 00:05:41.703 EAL: No shared files mode enabled, IPC is disabled 00:05:41.703 EAL: Heap on socket 0 was shrunk by 6MB 00:05:41.703 EAL: Trying to obtain current memory policy. 00:05:41.703 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.703 EAL: Restoring previous memory policy: 4 00:05:41.703 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.703 EAL: request: mp_malloc_sync 00:05:41.703 EAL: No shared files mode enabled, IPC is disabled 00:05:41.703 EAL: Heap on socket 0 was expanded by 10MB 00:05:41.703 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.703 EAL: request: mp_malloc_sync 00:05:41.703 EAL: No shared files mode enabled, IPC is disabled 00:05:41.703 EAL: Heap on socket 0 was shrunk by 10MB 00:05:41.703 EAL: Trying to obtain current memory policy. 00:05:41.703 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.703 EAL: Restoring previous memory policy: 4 00:05:41.703 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.703 EAL: request: mp_malloc_sync 00:05:41.703 EAL: No shared files mode enabled, IPC is disabled 00:05:41.703 EAL: Heap on socket 0 was expanded by 18MB 00:05:41.703 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.703 EAL: request: mp_malloc_sync 00:05:41.703 EAL: No shared files mode enabled, IPC is disabled 00:05:41.703 EAL: Heap on socket 0 was shrunk by 18MB 00:05:41.703 EAL: Trying to obtain current memory policy. 
00:05:41.703 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.703 EAL: Restoring previous memory policy: 4 00:05:41.703 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.703 EAL: request: mp_malloc_sync 00:05:41.703 EAL: No shared files mode enabled, IPC is disabled 00:05:41.703 EAL: Heap on socket 0 was expanded by 34MB 00:05:41.703 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.703 EAL: request: mp_malloc_sync 00:05:41.703 EAL: No shared files mode enabled, IPC is disabled 00:05:41.703 EAL: Heap on socket 0 was shrunk by 34MB 00:05:41.703 EAL: Trying to obtain current memory policy. 00:05:41.703 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.963 EAL: Restoring previous memory policy: 4 00:05:41.963 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.963 EAL: request: mp_malloc_sync 00:05:41.963 EAL: No shared files mode enabled, IPC is disabled 00:05:41.963 EAL: Heap on socket 0 was expanded by 66MB 00:05:41.963 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.963 EAL: request: mp_malloc_sync 00:05:41.963 EAL: No shared files mode enabled, IPC is disabled 00:05:41.963 EAL: Heap on socket 0 was shrunk by 66MB 00:05:41.963 EAL: Trying to obtain current memory policy. 00:05:41.963 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.963 EAL: Restoring previous memory policy: 4 00:05:41.963 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.963 EAL: request: mp_malloc_sync 00:05:41.963 EAL: No shared files mode enabled, IPC is disabled 00:05:41.963 EAL: Heap on socket 0 was expanded by 130MB 00:05:41.963 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.963 EAL: request: mp_malloc_sync 00:05:41.963 EAL: No shared files mode enabled, IPC is disabled 00:05:41.963 EAL: Heap on socket 0 was shrunk by 130MB 00:05:41.963 EAL: Trying to obtain current memory policy. 
00:05:41.963 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.963 EAL: Restoring previous memory policy: 4 00:05:41.963 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.963 EAL: request: mp_malloc_sync 00:05:41.963 EAL: No shared files mode enabled, IPC is disabled 00:05:41.963 EAL: Heap on socket 0 was expanded by 258MB 00:05:41.963 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.963 EAL: request: mp_malloc_sync 00:05:41.963 EAL: No shared files mode enabled, IPC is disabled 00:05:41.963 EAL: Heap on socket 0 was shrunk by 258MB 00:05:41.963 EAL: Trying to obtain current memory policy. 00:05:41.963 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.223 EAL: Restoring previous memory policy: 4 00:05:42.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.223 EAL: request: mp_malloc_sync 00:05:42.223 EAL: No shared files mode enabled, IPC is disabled 00:05:42.223 EAL: Heap on socket 0 was expanded by 514MB 00:05:42.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.223 EAL: request: mp_malloc_sync 00:05:42.223 EAL: No shared files mode enabled, IPC is disabled 00:05:42.223 EAL: Heap on socket 0 was shrunk by 514MB 00:05:42.223 EAL: Trying to obtain current memory policy. 
00:05:42.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.483 EAL: Restoring previous memory policy: 4 00:05:42.483 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.483 EAL: request: mp_malloc_sync 00:05:42.483 EAL: No shared files mode enabled, IPC is disabled 00:05:42.483 EAL: Heap on socket 0 was expanded by 1026MB 00:05:42.741 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.741 passed 00:05:42.741 00:05:42.741 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.741 suites 1 1 n/a 0 0 00:05:42.741 tests 2 2 2 0 0 00:05:42.741 asserts 5505 5505 5505 0 n/a 00:05:42.741 00:05:42.741 Elapsed time = 1.343 seconds 00:05:42.741 EAL: request: mp_malloc_sync 00:05:42.741 EAL: No shared files mode enabled, IPC is disabled 00:05:42.741 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:42.741 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.741 EAL: request: mp_malloc_sync 00:05:42.741 EAL: No shared files mode enabled, IPC is disabled 00:05:42.741 EAL: Heap on socket 0 was shrunk by 2MB 00:05:42.741 EAL: No shared files mode enabled, IPC is disabled 00:05:42.741 EAL: No shared files mode enabled, IPC is disabled 00:05:42.741 EAL: No shared files mode enabled, IPC is disabled 00:05:43.001 00:05:43.001 real 0m1.599s 00:05:43.001 user 0m0.773s 00:05:43.001 sys 0m0.691s 00:05:43.001 13:19:24 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.001 13:19:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:43.001 ************************************ 00:05:43.001 END TEST env_vtophys 00:05:43.001 ************************************ 00:05:43.001 13:19:24 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:43.001 13:19:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.001 13:19:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.001 13:19:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.001 
************************************ 00:05:43.001 START TEST env_pci 00:05:43.001 ************************************ 00:05:43.001 13:19:24 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:43.001 00:05:43.001 00:05:43.001 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.001 http://cunit.sourceforge.net/ 00:05:43.001 00:05:43.001 00:05:43.001 Suite: pci 00:05:43.001 Test: pci_hook ...[2024-11-20 13:19:24.524207] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68722 has claimed it 00:05:43.001 passed 00:05:43.001 00:05:43.001 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.001 suites 1 1 n/a 0 0 00:05:43.001 tests 1 1 1 0 0 00:05:43.001 asserts 25 25 25 0 n/a 00:05:43.001 00:05:43.001 Elapsed time = 0.008 secondsEAL: Cannot find device (10000:00:01.0) 00:05:43.001 EAL: Failed to attach device on primary process 00:05:43.001 00:05:43.001 00:05:43.001 real 0m0.088s 00:05:43.001 user 0m0.037s 00:05:43.001 sys 0m0.049s 00:05:43.001 13:19:24 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.001 13:19:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:43.001 ************************************ 00:05:43.001 END TEST env_pci 00:05:43.001 ************************************ 00:05:43.001 13:19:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:43.001 13:19:24 env -- env/env.sh@15 -- # uname 00:05:43.001 13:19:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:43.001 13:19:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:43.001 13:19:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.001 13:19:24 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:43.001 13:19:24 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.002 13:19:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.002 ************************************ 00:05:43.002 START TEST env_dpdk_post_init 00:05:43.002 ************************************ 00:05:43.002 13:19:24 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.262 EAL: Detected CPU lcores: 10 00:05:43.262 EAL: Detected NUMA nodes: 1 00:05:43.262 EAL: Detected shared linkage of DPDK 00:05:43.262 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.262 EAL: Selected IOVA mode 'PA' 00:05:43.262 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.262 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:43.262 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:43.262 Starting DPDK initialization... 00:05:43.262 Starting SPDK post initialization... 00:05:43.262 SPDK NVMe probe 00:05:43.262 Attaching to 0000:00:10.0 00:05:43.262 Attaching to 0000:00:11.0 00:05:43.262 Attached to 0000:00:10.0 00:05:43.262 Attached to 0000:00:11.0 00:05:43.262 Cleaning up... 
00:05:43.262 00:05:43.262 real 0m0.241s 00:05:43.262 user 0m0.070s 00:05:43.262 sys 0m0.073s 00:05:43.262 13:19:24 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.262 13:19:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.262 ************************************ 00:05:43.262 END TEST env_dpdk_post_init 00:05:43.262 ************************************ 00:05:43.524 13:19:24 env -- env/env.sh@26 -- # uname 00:05:43.524 13:19:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:43.524 13:19:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.524 13:19:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.524 13:19:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.524 13:19:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.524 ************************************ 00:05:43.524 START TEST env_mem_callbacks 00:05:43.524 ************************************ 00:05:43.524 13:19:24 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.524 EAL: Detected CPU lcores: 10 00:05:43.524 EAL: Detected NUMA nodes: 1 00:05:43.524 EAL: Detected shared linkage of DPDK 00:05:43.524 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.524 EAL: Selected IOVA mode 'PA' 00:05:43.524 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.524 00:05:43.524 00:05:43.524 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.524 http://cunit.sourceforge.net/ 00:05:43.524 00:05:43.524 00:05:43.524 Suite: memory 00:05:43.524 Test: test ... 
00:05:43.524 register 0x200000200000 2097152 00:05:43.524 malloc 3145728 00:05:43.524 register 0x200000400000 4194304 00:05:43.524 buf 0x200000500000 len 3145728 PASSED 00:05:43.524 malloc 64 00:05:43.524 buf 0x2000004fff40 len 64 PASSED 00:05:43.524 malloc 4194304 00:05:43.524 register 0x200000800000 6291456 00:05:43.524 buf 0x200000a00000 len 4194304 PASSED 00:05:43.524 free 0x200000500000 3145728 00:05:43.524 free 0x2000004fff40 64 00:05:43.524 unregister 0x200000400000 4194304 PASSED 00:05:43.524 free 0x200000a00000 4194304 00:05:43.524 unregister 0x200000800000 6291456 PASSED 00:05:43.524 malloc 8388608 00:05:43.524 register 0x200000400000 10485760 00:05:43.524 buf 0x200000600000 len 8388608 PASSED 00:05:43.524 free 0x200000600000 8388608 00:05:43.524 unregister 0x200000400000 10485760 PASSED 00:05:43.524 passed 00:05:43.524 00:05:43.524 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.524 suites 1 1 n/a 0 0 00:05:43.524 tests 1 1 1 0 0 00:05:43.524 asserts 15 15 15 0 n/a 00:05:43.524 00:05:43.524 Elapsed time = 0.011 seconds 00:05:43.524 00:05:43.524 real 0m0.179s 00:05:43.524 user 0m0.030s 00:05:43.524 sys 0m0.049s 00:05:43.524 13:19:25 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.524 13:19:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:43.524 ************************************ 00:05:43.524 END TEST env_mem_callbacks 00:05:43.524 ************************************ 00:05:43.784 00:05:43.784 real 0m2.959s 00:05:43.784 user 0m1.375s 00:05:43.784 sys 0m1.246s 00:05:43.784 13:19:25 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.784 13:19:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.784 ************************************ 00:05:43.784 END TEST env 00:05:43.784 ************************************ 00:05:43.784 13:19:25 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:43.784 13:19:25 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.784 13:19:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.784 13:19:25 -- common/autotest_common.sh@10 -- # set +x 00:05:43.784 ************************************ 00:05:43.784 START TEST rpc 00:05:43.784 ************************************ 00:05:43.784 13:19:25 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:43.784 * Looking for test storage... 00:05:43.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:43.784 13:19:25 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.784 13:19:25 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.784 13:19:25 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.045 13:19:25 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.045 13:19:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.045 13:19:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.045 13:19:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.045 13:19:25 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.045 13:19:25 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.045 13:19:25 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.045 13:19:25 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.045 13:19:25 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.045 13:19:25 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.045 13:19:25 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.045 13:19:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.045 13:19:25 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:44.045 13:19:25 rpc -- scripts/common.sh@345 -- # : 1 00:05:44.045 13:19:25 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.045 13:19:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.045 13:19:25 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:44.045 13:19:25 rpc -- scripts/common.sh@353 -- # local d=1 00:05:44.046 13:19:25 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.046 13:19:25 rpc -- scripts/common.sh@355 -- # echo 1 00:05:44.046 13:19:25 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.046 13:19:25 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:44.046 13:19:25 rpc -- scripts/common.sh@353 -- # local d=2 00:05:44.046 13:19:25 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.046 13:19:25 rpc -- scripts/common.sh@355 -- # echo 2 00:05:44.046 13:19:25 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.046 13:19:25 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.046 13:19:25 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.046 13:19:25 rpc -- scripts/common.sh@368 -- # return 0 00:05:44.046 13:19:25 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.046 13:19:25 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.046 --rc genhtml_branch_coverage=1 00:05:44.046 --rc genhtml_function_coverage=1 00:05:44.046 --rc genhtml_legend=1 00:05:44.046 --rc geninfo_all_blocks=1 00:05:44.046 --rc geninfo_unexecuted_blocks=1 00:05:44.046 00:05:44.046 ' 00:05:44.046 13:19:25 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.046 --rc genhtml_branch_coverage=1 00:05:44.046 --rc genhtml_function_coverage=1 00:05:44.046 --rc genhtml_legend=1 00:05:44.046 --rc geninfo_all_blocks=1 00:05:44.046 --rc geninfo_unexecuted_blocks=1 00:05:44.046 00:05:44.046 ' 00:05:44.046 13:19:25 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:44.046 --rc genhtml_branch_coverage=1 00:05:44.046 --rc genhtml_function_coverage=1 00:05:44.046 --rc genhtml_legend=1 00:05:44.046 --rc geninfo_all_blocks=1 00:05:44.046 --rc geninfo_unexecuted_blocks=1 00:05:44.046 00:05:44.046 ' 00:05:44.046 13:19:25 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.046 --rc genhtml_branch_coverage=1 00:05:44.046 --rc genhtml_function_coverage=1 00:05:44.046 --rc genhtml_legend=1 00:05:44.046 --rc geninfo_all_blocks=1 00:05:44.046 --rc geninfo_unexecuted_blocks=1 00:05:44.046 00:05:44.046 ' 00:05:44.046 13:19:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68849 00:05:44.046 13:19:25 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:44.046 13:19:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.046 13:19:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68849 00:05:44.046 13:19:25 rpc -- common/autotest_common.sh@835 -- # '[' -z 68849 ']' 00:05:44.046 13:19:25 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.046 13:19:25 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.046 13:19:25 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.046 13:19:25 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.046 13:19:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.046 [2024-11-20 13:19:25.596146] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:05:44.046 [2024-11-20 13:19:25.596287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68849 ] 00:05:44.305 [2024-11-20 13:19:25.749777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.305 [2024-11-20 13:19:25.775494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:44.305 [2024-11-20 13:19:25.775560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68849' to capture a snapshot of events at runtime. 00:05:44.305 [2024-11-20 13:19:25.775571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:44.305 [2024-11-20 13:19:25.775598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:44.305 [2024-11-20 13:19:25.775609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68849 for offline analysis/debug. 
00:05:44.305 [2024-11-20 13:19:25.776025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.876 13:19:26 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.876 13:19:26 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:44.876 13:19:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:44.876 13:19:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:44.876 13:19:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:44.876 13:19:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:44.876 13:19:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.876 13:19:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.876 13:19:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.876 ************************************ 00:05:44.876 START TEST rpc_integrity 00:05:44.876 ************************************ 00:05:44.876 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:44.876 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:44.876 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.876 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.876 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.876 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:44.876 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:44.877 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:44.877 13:19:26 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:44.877 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.877 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.877 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.877 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:44.877 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:44.877 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.877 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.877 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.877 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:44.877 { 00:05:44.877 "name": "Malloc0", 00:05:44.877 "aliases": [ 00:05:44.877 "e0f5c443-2a9c-4435-8875-cb7151406bba" 00:05:44.877 ], 00:05:44.877 "product_name": "Malloc disk", 00:05:44.877 "block_size": 512, 00:05:44.877 "num_blocks": 16384, 00:05:44.877 "uuid": "e0f5c443-2a9c-4435-8875-cb7151406bba", 00:05:44.877 "assigned_rate_limits": { 00:05:44.877 "rw_ios_per_sec": 0, 00:05:44.877 "rw_mbytes_per_sec": 0, 00:05:44.877 "r_mbytes_per_sec": 0, 00:05:44.877 "w_mbytes_per_sec": 0 00:05:44.877 }, 00:05:44.877 "claimed": false, 00:05:44.877 "zoned": false, 00:05:44.877 "supported_io_types": { 00:05:44.877 "read": true, 00:05:44.877 "write": true, 00:05:44.877 "unmap": true, 00:05:44.877 "flush": true, 00:05:44.877 "reset": true, 00:05:44.877 "nvme_admin": false, 00:05:44.877 "nvme_io": false, 00:05:44.877 "nvme_io_md": false, 00:05:44.877 "write_zeroes": true, 00:05:44.877 "zcopy": true, 00:05:44.877 "get_zone_info": false, 00:05:44.877 "zone_management": false, 00:05:44.877 "zone_append": false, 00:05:44.877 "compare": false, 00:05:44.877 "compare_and_write": false, 00:05:44.877 "abort": true, 00:05:44.877 "seek_hole": false, 
00:05:44.877 "seek_data": false, 00:05:44.877 "copy": true, 00:05:44.877 "nvme_iov_md": false 00:05:44.877 }, 00:05:44.877 "memory_domains": [ 00:05:44.877 { 00:05:44.877 "dma_device_id": "system", 00:05:44.877 "dma_device_type": 1 00:05:44.877 }, 00:05:44.877 { 00:05:44.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.877 "dma_device_type": 2 00:05:44.877 } 00:05:44.877 ], 00:05:44.877 "driver_specific": {} 00:05:44.877 } 00:05:44.877 ]' 00:05:44.877 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:45.137 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.137 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:45.137 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.137 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.137 [2024-11-20 13:19:26.569547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:45.137 [2024-11-20 13:19:26.569685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.137 [2024-11-20 13:19:26.569752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:05:45.137 [2024-11-20 13:19:26.569772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.137 [2024-11-20 13:19:26.572273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.137 [2024-11-20 13:19:26.572316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.137 Passthru0 00:05:45.137 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.137 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.137 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.137 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:45.137 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.137 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.137 { 00:05:45.137 "name": "Malloc0", 00:05:45.137 "aliases": [ 00:05:45.137 "e0f5c443-2a9c-4435-8875-cb7151406bba" 00:05:45.137 ], 00:05:45.137 "product_name": "Malloc disk", 00:05:45.137 "block_size": 512, 00:05:45.137 "num_blocks": 16384, 00:05:45.137 "uuid": "e0f5c443-2a9c-4435-8875-cb7151406bba", 00:05:45.137 "assigned_rate_limits": { 00:05:45.137 "rw_ios_per_sec": 0, 00:05:45.137 "rw_mbytes_per_sec": 0, 00:05:45.137 "r_mbytes_per_sec": 0, 00:05:45.137 "w_mbytes_per_sec": 0 00:05:45.137 }, 00:05:45.137 "claimed": true, 00:05:45.137 "claim_type": "exclusive_write", 00:05:45.137 "zoned": false, 00:05:45.137 "supported_io_types": { 00:05:45.137 "read": true, 00:05:45.137 "write": true, 00:05:45.137 "unmap": true, 00:05:45.137 "flush": true, 00:05:45.137 "reset": true, 00:05:45.137 "nvme_admin": false, 00:05:45.137 "nvme_io": false, 00:05:45.137 "nvme_io_md": false, 00:05:45.137 "write_zeroes": true, 00:05:45.137 "zcopy": true, 00:05:45.137 "get_zone_info": false, 00:05:45.137 "zone_management": false, 00:05:45.137 "zone_append": false, 00:05:45.137 "compare": false, 00:05:45.137 "compare_and_write": false, 00:05:45.137 "abort": true, 00:05:45.137 "seek_hole": false, 00:05:45.137 "seek_data": false, 00:05:45.137 "copy": true, 00:05:45.137 "nvme_iov_md": false 00:05:45.137 }, 00:05:45.137 "memory_domains": [ 00:05:45.137 { 00:05:45.137 "dma_device_id": "system", 00:05:45.137 "dma_device_type": 1 00:05:45.137 }, 00:05:45.137 { 00:05:45.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.137 "dma_device_type": 2 00:05:45.137 } 00:05:45.137 ], 00:05:45.137 "driver_specific": {} 00:05:45.137 }, 00:05:45.137 { 00:05:45.137 "name": "Passthru0", 00:05:45.137 "aliases": [ 00:05:45.137 "a08ef87e-6cfb-5bc7-89a0-41f09c3aa673" 00:05:45.137 ], 00:05:45.137 "product_name": "passthru", 00:05:45.137 
"block_size": 512, 00:05:45.137 "num_blocks": 16384, 00:05:45.137 "uuid": "a08ef87e-6cfb-5bc7-89a0-41f09c3aa673", 00:05:45.137 "assigned_rate_limits": { 00:05:45.137 "rw_ios_per_sec": 0, 00:05:45.137 "rw_mbytes_per_sec": 0, 00:05:45.137 "r_mbytes_per_sec": 0, 00:05:45.137 "w_mbytes_per_sec": 0 00:05:45.137 }, 00:05:45.137 "claimed": false, 00:05:45.137 "zoned": false, 00:05:45.137 "supported_io_types": { 00:05:45.137 "read": true, 00:05:45.137 "write": true, 00:05:45.137 "unmap": true, 00:05:45.137 "flush": true, 00:05:45.137 "reset": true, 00:05:45.137 "nvme_admin": false, 00:05:45.137 "nvme_io": false, 00:05:45.137 "nvme_io_md": false, 00:05:45.137 "write_zeroes": true, 00:05:45.137 "zcopy": true, 00:05:45.137 "get_zone_info": false, 00:05:45.137 "zone_management": false, 00:05:45.137 "zone_append": false, 00:05:45.137 "compare": false, 00:05:45.137 "compare_and_write": false, 00:05:45.137 "abort": true, 00:05:45.137 "seek_hole": false, 00:05:45.137 "seek_data": false, 00:05:45.137 "copy": true, 00:05:45.137 "nvme_iov_md": false 00:05:45.137 }, 00:05:45.137 "memory_domains": [ 00:05:45.137 { 00:05:45.137 "dma_device_id": "system", 00:05:45.137 "dma_device_type": 1 00:05:45.137 }, 00:05:45.137 { 00:05:45.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.137 "dma_device_type": 2 00:05:45.137 } 00:05:45.137 ], 00:05:45.137 "driver_specific": { 00:05:45.137 "passthru": { 00:05:45.137 "name": "Passthru0", 00:05:45.137 "base_bdev_name": "Malloc0" 00:05:45.137 } 00:05:45.137 } 00:05:45.137 } 00:05:45.137 ]' 00:05:45.137 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:45.137 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.137 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.137 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.137 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.137 13:19:26 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.137 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:45.137 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.137 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.137 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.137 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.137 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.137 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.137 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.137 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.137 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:45.137 13:19:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.137 00:05:45.137 real 0m0.319s 00:05:45.137 user 0m0.188s 00:05:45.137 sys 0m0.059s 00:05:45.137 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.137 13:19:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.137 ************************************ 00:05:45.137 END TEST rpc_integrity 00:05:45.137 ************************************ 00:05:45.137 13:19:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:45.137 13:19:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.137 13:19:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.137 13:19:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.137 ************************************ 00:05:45.137 START TEST rpc_plugins 00:05:45.137 ************************************ 00:05:45.138 13:19:26 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:45.138 13:19:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:45.138 13:19:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.138 13:19:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.397 13:19:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.397 13:19:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:45.397 13:19:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:45.397 13:19:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.397 13:19:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.397 13:19:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.397 13:19:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:45.397 { 00:05:45.397 "name": "Malloc1", 00:05:45.397 "aliases": [ 00:05:45.397 "006a4cbe-d435-48e5-b679-934d4ed2c78a" 00:05:45.397 ], 00:05:45.397 "product_name": "Malloc disk", 00:05:45.397 "block_size": 4096, 00:05:45.397 "num_blocks": 256, 00:05:45.397 "uuid": "006a4cbe-d435-48e5-b679-934d4ed2c78a", 00:05:45.397 "assigned_rate_limits": { 00:05:45.397 "rw_ios_per_sec": 0, 00:05:45.397 "rw_mbytes_per_sec": 0, 00:05:45.397 "r_mbytes_per_sec": 0, 00:05:45.397 "w_mbytes_per_sec": 0 00:05:45.397 }, 00:05:45.397 "claimed": false, 00:05:45.397 "zoned": false, 00:05:45.397 "supported_io_types": { 00:05:45.397 "read": true, 00:05:45.397 "write": true, 00:05:45.397 "unmap": true, 00:05:45.397 "flush": true, 00:05:45.397 "reset": true, 00:05:45.397 "nvme_admin": false, 00:05:45.397 "nvme_io": false, 00:05:45.397 "nvme_io_md": false, 00:05:45.397 "write_zeroes": true, 00:05:45.397 "zcopy": true, 00:05:45.397 "get_zone_info": false, 00:05:45.397 "zone_management": false, 00:05:45.397 "zone_append": false, 00:05:45.397 "compare": false, 00:05:45.397 "compare_and_write": false, 00:05:45.397 "abort": true, 00:05:45.397 "seek_hole": false, 00:05:45.397 "seek_data": false, 00:05:45.397 "copy": 
true, 00:05:45.397 "nvme_iov_md": false 00:05:45.397 }, 00:05:45.397 "memory_domains": [ 00:05:45.397 { 00:05:45.397 "dma_device_id": "system", 00:05:45.397 "dma_device_type": 1 00:05:45.397 }, 00:05:45.398 { 00:05:45.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.398 "dma_device_type": 2 00:05:45.398 } 00:05:45.398 ], 00:05:45.398 "driver_specific": {} 00:05:45.398 } 00:05:45.398 ]' 00:05:45.398 13:19:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:45.398 13:19:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:45.398 13:19:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:45.398 13:19:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.398 13:19:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.398 13:19:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.398 13:19:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:45.398 13:19:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.398 13:19:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.398 13:19:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.398 13:19:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:45.398 13:19:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:45.398 ************************************ 00:05:45.398 END TEST rpc_plugins 00:05:45.398 ************************************ 00:05:45.398 13:19:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:45.398 00:05:45.398 real 0m0.163s 00:05:45.398 user 0m0.095s 00:05:45.398 sys 0m0.028s 00:05:45.398 13:19:26 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.398 13:19:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.398 13:19:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:45.398 13:19:27 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.398 13:19:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.398 13:19:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.398 ************************************ 00:05:45.398 START TEST rpc_trace_cmd_test 00:05:45.398 ************************************ 00:05:45.398 13:19:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:45.398 13:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:45.398 13:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:45.398 13:19:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.398 13:19:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.398 13:19:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.398 13:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:45.398 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68849", 00:05:45.398 "tpoint_group_mask": "0x8", 00:05:45.398 "iscsi_conn": { 00:05:45.398 "mask": "0x2", 00:05:45.398 "tpoint_mask": "0x0" 00:05:45.398 }, 00:05:45.398 "scsi": { 00:05:45.398 "mask": "0x4", 00:05:45.398 "tpoint_mask": "0x0" 00:05:45.398 }, 00:05:45.398 "bdev": { 00:05:45.398 "mask": "0x8", 00:05:45.398 "tpoint_mask": "0xffffffffffffffff" 00:05:45.398 }, 00:05:45.398 "nvmf_rdma": { 00:05:45.398 "mask": "0x10", 00:05:45.398 "tpoint_mask": "0x0" 00:05:45.398 }, 00:05:45.398 "nvmf_tcp": { 00:05:45.398 "mask": "0x20", 00:05:45.398 "tpoint_mask": "0x0" 00:05:45.398 }, 00:05:45.398 "ftl": { 00:05:45.398 "mask": "0x40", 00:05:45.398 "tpoint_mask": "0x0" 00:05:45.398 }, 00:05:45.398 "blobfs": { 00:05:45.398 "mask": "0x80", 00:05:45.398 "tpoint_mask": "0x0" 00:05:45.398 }, 00:05:45.398 "dsa": { 00:05:45.398 "mask": "0x200", 00:05:45.398 "tpoint_mask": "0x0" 00:05:45.398 }, 00:05:45.398 "thread": { 00:05:45.398 "mask": "0x400", 00:05:45.398 
"tpoint_mask": "0x0" 00:05:45.398 }, 00:05:45.398 "nvme_pcie": { 00:05:45.398 "mask": "0x800", 00:05:45.398 "tpoint_mask": "0x0" 00:05:45.398 }, 00:05:45.398 "iaa": { 00:05:45.398 "mask": "0x1000", 00:05:45.398 "tpoint_mask": "0x0" 00:05:45.398 }, 00:05:45.398 "nvme_tcp": { 00:05:45.398 "mask": "0x2000", 00:05:45.398 "tpoint_mask": "0x0" 00:05:45.398 }, 00:05:45.398 "bdev_nvme": { 00:05:45.398 "mask": "0x4000", 00:05:45.398 "tpoint_mask": "0x0" 00:05:45.398 }, 00:05:45.398 "sock": { 00:05:45.398 "mask": "0x8000", 00:05:45.398 "tpoint_mask": "0x0" 00:05:45.398 }, 00:05:45.398 "blob": { 00:05:45.398 "mask": "0x10000", 00:05:45.398 "tpoint_mask": "0x0" 00:05:45.398 }, 00:05:45.398 "bdev_raid": { 00:05:45.398 "mask": "0x20000", 00:05:45.398 "tpoint_mask": "0x0" 00:05:45.398 }, 00:05:45.398 "scheduler": { 00:05:45.398 "mask": "0x40000", 00:05:45.398 "tpoint_mask": "0x0" 00:05:45.398 } 00:05:45.398 }' 00:05:45.398 13:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:45.658 13:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:45.658 13:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:45.658 13:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:45.658 13:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:45.658 13:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:45.658 13:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:45.658 13:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:45.658 13:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:45.658 ************************************ 00:05:45.658 END TEST rpc_trace_cmd_test 00:05:45.658 ************************************ 00:05:45.658 13:19:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:45.658 00:05:45.658 real 0m0.214s 00:05:45.658 user 
0m0.172s 00:05:45.658 sys 0m0.033s 00:05:45.658 13:19:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.658 13:19:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.658 13:19:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:45.658 13:19:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:45.658 13:19:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:45.658 13:19:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.658 13:19:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.658 13:19:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.658 ************************************ 00:05:45.658 START TEST rpc_daemon_integrity 00:05:45.658 ************************************ 00:05:45.658 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:45.658 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.658 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.658 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.658 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.658 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.658 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:45.918 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.918 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.918 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.919 { 00:05:45.919 "name": "Malloc2", 00:05:45.919 "aliases": [ 00:05:45.919 "fdb8e6e8-64a8-4cf0-993d-c9b01184429e" 00:05:45.919 ], 00:05:45.919 "product_name": "Malloc disk", 00:05:45.919 "block_size": 512, 00:05:45.919 "num_blocks": 16384, 00:05:45.919 "uuid": "fdb8e6e8-64a8-4cf0-993d-c9b01184429e", 00:05:45.919 "assigned_rate_limits": { 00:05:45.919 "rw_ios_per_sec": 0, 00:05:45.919 "rw_mbytes_per_sec": 0, 00:05:45.919 "r_mbytes_per_sec": 0, 00:05:45.919 "w_mbytes_per_sec": 0 00:05:45.919 }, 00:05:45.919 "claimed": false, 00:05:45.919 "zoned": false, 00:05:45.919 "supported_io_types": { 00:05:45.919 "read": true, 00:05:45.919 "write": true, 00:05:45.919 "unmap": true, 00:05:45.919 "flush": true, 00:05:45.919 "reset": true, 00:05:45.919 "nvme_admin": false, 00:05:45.919 "nvme_io": false, 00:05:45.919 "nvme_io_md": false, 00:05:45.919 "write_zeroes": true, 00:05:45.919 "zcopy": true, 00:05:45.919 "get_zone_info": false, 00:05:45.919 "zone_management": false, 00:05:45.919 "zone_append": false, 00:05:45.919 "compare": false, 00:05:45.919 "compare_and_write": false, 00:05:45.919 "abort": true, 00:05:45.919 "seek_hole": false, 00:05:45.919 "seek_data": false, 00:05:45.919 "copy": true, 00:05:45.919 "nvme_iov_md": false 00:05:45.919 }, 00:05:45.919 "memory_domains": [ 00:05:45.919 { 00:05:45.919 "dma_device_id": "system", 00:05:45.919 "dma_device_type": 1 00:05:45.919 }, 00:05:45.919 { 00:05:45.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.919 "dma_device_type": 2 00:05:45.919 } 
00:05:45.919 ], 00:05:45.919 "driver_specific": {} 00:05:45.919 } 00:05:45.919 ]' 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.919 [2024-11-20 13:19:27.440783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:45.919 [2024-11-20 13:19:27.440844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.919 [2024-11-20 13:19:27.440867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:05:45.919 [2024-11-20 13:19:27.440876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.919 [2024-11-20 13:19:27.443280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.919 [2024-11-20 13:19:27.443355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.919 Passthru0 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.919 { 00:05:45.919 "name": "Malloc2", 00:05:45.919 "aliases": [ 00:05:45.919 "fdb8e6e8-64a8-4cf0-993d-c9b01184429e" 
00:05:45.919 ], 00:05:45.919 "product_name": "Malloc disk", 00:05:45.919 "block_size": 512, 00:05:45.919 "num_blocks": 16384, 00:05:45.919 "uuid": "fdb8e6e8-64a8-4cf0-993d-c9b01184429e", 00:05:45.919 "assigned_rate_limits": { 00:05:45.919 "rw_ios_per_sec": 0, 00:05:45.919 "rw_mbytes_per_sec": 0, 00:05:45.919 "r_mbytes_per_sec": 0, 00:05:45.919 "w_mbytes_per_sec": 0 00:05:45.919 }, 00:05:45.919 "claimed": true, 00:05:45.919 "claim_type": "exclusive_write", 00:05:45.919 "zoned": false, 00:05:45.919 "supported_io_types": { 00:05:45.919 "read": true, 00:05:45.919 "write": true, 00:05:45.919 "unmap": true, 00:05:45.919 "flush": true, 00:05:45.919 "reset": true, 00:05:45.919 "nvme_admin": false, 00:05:45.919 "nvme_io": false, 00:05:45.919 "nvme_io_md": false, 00:05:45.919 "write_zeroes": true, 00:05:45.919 "zcopy": true, 00:05:45.919 "get_zone_info": false, 00:05:45.919 "zone_management": false, 00:05:45.919 "zone_append": false, 00:05:45.919 "compare": false, 00:05:45.919 "compare_and_write": false, 00:05:45.919 "abort": true, 00:05:45.919 "seek_hole": false, 00:05:45.919 "seek_data": false, 00:05:45.919 "copy": true, 00:05:45.919 "nvme_iov_md": false 00:05:45.919 }, 00:05:45.919 "memory_domains": [ 00:05:45.919 { 00:05:45.919 "dma_device_id": "system", 00:05:45.919 "dma_device_type": 1 00:05:45.919 }, 00:05:45.919 { 00:05:45.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.919 "dma_device_type": 2 00:05:45.919 } 00:05:45.919 ], 00:05:45.919 "driver_specific": {} 00:05:45.919 }, 00:05:45.919 { 00:05:45.919 "name": "Passthru0", 00:05:45.919 "aliases": [ 00:05:45.919 "061f41bb-a03e-5919-98e9-84125c1e0b75" 00:05:45.919 ], 00:05:45.919 "product_name": "passthru", 00:05:45.919 "block_size": 512, 00:05:45.919 "num_blocks": 16384, 00:05:45.919 "uuid": "061f41bb-a03e-5919-98e9-84125c1e0b75", 00:05:45.919 "assigned_rate_limits": { 00:05:45.919 "rw_ios_per_sec": 0, 00:05:45.919 "rw_mbytes_per_sec": 0, 00:05:45.919 "r_mbytes_per_sec": 0, 00:05:45.919 "w_mbytes_per_sec": 0 
00:05:45.919 }, 00:05:45.919 "claimed": false, 00:05:45.919 "zoned": false, 00:05:45.919 "supported_io_types": { 00:05:45.919 "read": true, 00:05:45.919 "write": true, 00:05:45.919 "unmap": true, 00:05:45.919 "flush": true, 00:05:45.919 "reset": true, 00:05:45.919 "nvme_admin": false, 00:05:45.919 "nvme_io": false, 00:05:45.919 "nvme_io_md": false, 00:05:45.919 "write_zeroes": true, 00:05:45.919 "zcopy": true, 00:05:45.919 "get_zone_info": false, 00:05:45.919 "zone_management": false, 00:05:45.919 "zone_append": false, 00:05:45.919 "compare": false, 00:05:45.919 "compare_and_write": false, 00:05:45.919 "abort": true, 00:05:45.919 "seek_hole": false, 00:05:45.919 "seek_data": false, 00:05:45.919 "copy": true, 00:05:45.919 "nvme_iov_md": false 00:05:45.919 }, 00:05:45.919 "memory_domains": [ 00:05:45.919 { 00:05:45.919 "dma_device_id": "system", 00:05:45.919 "dma_device_type": 1 00:05:45.919 }, 00:05:45.919 { 00:05:45.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.919 "dma_device_type": 2 00:05:45.919 } 00:05:45.919 ], 00:05:45.919 "driver_specific": { 00:05:45.919 "passthru": { 00:05:45.919 "name": "Passthru0", 00:05:45.919 "base_bdev_name": "Malloc2" 00:05:45.919 } 00:05:45.919 } 00:05:45.919 } 00:05:45.919 ]' 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.919 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:46.181 ************************************ 00:05:46.181 END TEST rpc_daemon_integrity 00:05:46.181 ************************************ 00:05:46.181 13:19:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.181 00:05:46.181 real 0m0.300s 00:05:46.181 user 0m0.185s 00:05:46.181 sys 0m0.041s 00:05:46.181 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.181 13:19:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.181 13:19:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:46.181 13:19:27 rpc -- rpc/rpc.sh@84 -- # killprocess 68849 00:05:46.181 13:19:27 rpc -- common/autotest_common.sh@954 -- # '[' -z 68849 ']' 00:05:46.181 13:19:27 rpc -- common/autotest_common.sh@958 -- # kill -0 68849 00:05:46.181 13:19:27 rpc -- common/autotest_common.sh@959 -- # uname 00:05:46.181 13:19:27 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.181 13:19:27 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68849 00:05:46.181 killing process with pid 68849 00:05:46.181 13:19:27 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.181 13:19:27 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:05:46.181 13:19:27 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68849' 00:05:46.181 13:19:27 rpc -- common/autotest_common.sh@973 -- # kill 68849 00:05:46.181 13:19:27 rpc -- common/autotest_common.sh@978 -- # wait 68849 00:05:46.440 00:05:46.440 real 0m2.782s 00:05:46.440 user 0m3.325s 00:05:46.440 sys 0m0.841s 00:05:46.440 ************************************ 00:05:46.440 END TEST rpc 00:05:46.440 ************************************ 00:05:46.440 13:19:28 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.440 13:19:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.440 13:19:28 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:46.440 13:19:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.440 13:19:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.440 13:19:28 -- common/autotest_common.sh@10 -- # set +x 00:05:46.699 ************************************ 00:05:46.699 START TEST skip_rpc 00:05:46.699 ************************************ 00:05:46.699 13:19:28 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:46.699 * Looking for test storage... 
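The trace above shows autotest_common.sh's killprocess helper tearing down the rpc test target (pid 68849): probe liveness with `kill -0`, resolve the command name with `ps -o comm=`, refuse to signal a sudo wrapper, then kill and wait. A rough Python sketch of that flow (an illustration of the traced logic, not the actual shell helper):

```python
import os
import signal
import subprocess

def killprocess(pid):
    """Mirror the traced flow: liveness probe, name check, SIGTERM.
    Returns True only when a signal was actually sent."""
    try:
        os.kill(pid, 0)  # 'kill -0 <pid>': liveness probe, sends no signal
    except ProcessLookupError:
        return False
    # 'ps --no-headers -o comm= <pid>': resolve the command name
    comm = subprocess.run(
        ["ps", "--no-headers", "-o", "comm=", str(pid)],
        capture_output=True, text=True,
    ).stdout.strip()
    if comm == "sudo":  # the helper never signals a sudo wrapper directly
        return False
    print(f"killing process with pid {pid}")
    os.kill(pid, signal.SIGTERM)
    return True
```

The shell helper then `wait`s on the pid; in this sketch the caller is expected to reap the child itself.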
00:05:46.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:46.699 13:19:28 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.699 13:19:28 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.699 13:19:28 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.699 13:19:28 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.699 13:19:28 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:46.699 13:19:28 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.699 13:19:28 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.699 --rc genhtml_branch_coverage=1 00:05:46.699 --rc genhtml_function_coverage=1 00:05:46.699 --rc genhtml_legend=1 00:05:46.699 --rc geninfo_all_blocks=1 00:05:46.699 --rc geninfo_unexecuted_blocks=1 00:05:46.699 00:05:46.699 ' 00:05:46.699 13:19:28 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.699 --rc genhtml_branch_coverage=1 00:05:46.699 --rc genhtml_function_coverage=1 00:05:46.699 --rc genhtml_legend=1 00:05:46.699 --rc geninfo_all_blocks=1 00:05:46.699 --rc geninfo_unexecuted_blocks=1 00:05:46.699 00:05:46.699 ' 00:05:46.699 13:19:28 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:46.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.699 --rc genhtml_branch_coverage=1 00:05:46.699 --rc genhtml_function_coverage=1 00:05:46.699 --rc genhtml_legend=1 00:05:46.699 --rc geninfo_all_blocks=1 00:05:46.699 --rc geninfo_unexecuted_blocks=1 00:05:46.699 00:05:46.699 ' 00:05:46.699 13:19:28 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.699 --rc genhtml_branch_coverage=1 00:05:46.699 --rc genhtml_function_coverage=1 00:05:46.699 --rc genhtml_legend=1 00:05:46.699 --rc geninfo_all_blocks=1 00:05:46.699 --rc geninfo_unexecuted_blocks=1 00:05:46.699 00:05:46.699 ' 00:05:46.699 13:19:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:46.699 13:19:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:46.699 13:19:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:46.699 13:19:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.699 13:19:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.699 13:19:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.699 ************************************ 00:05:46.699 START TEST skip_rpc 00:05:46.699 ************************************ 00:05:46.699 13:19:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:46.699 13:19:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69050 00:05:46.699 13:19:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:46.699 13:19:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.699 13:19:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:46.960 [2024-11-20 13:19:28.437375] Starting SPDK v25.01-pre 
git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:05:46.960 [2024-11-20 13:19:28.437572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69050 ] 00:05:46.960 [2024-11-20 13:19:28.593590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.960 [2024-11-20 13:19:28.618553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69050 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69050 ']' 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69050 00:05:52.270 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:52.271 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.271 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69050 00:05:52.271 killing process with pid 69050 00:05:52.271 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.271 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.271 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69050' 00:05:52.271 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69050 00:05:52.271 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69050 00:05:52.271 ************************************ 00:05:52.271 END TEST skip_rpc 00:05:52.271 ************************************ 00:05:52.271 00:05:52.271 real 0m5.402s 00:05:52.271 user 0m5.027s 00:05:52.271 sys 0m0.292s 00:05:52.271 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.271 13:19:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.271 13:19:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:52.271 13:19:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.271 13:19:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.271 13:19:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.271 
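The `NOT rpc_cmd spdk_get_version` sequence above is the core of the skip_rpc case: with the target started under `--no-rpc-server`, the RPC must fail, and the NOT wrapper inverts that failure into a test pass via `es=$?; (( !es == 0 ))`. A simplified Python analogue (the real shell helper also validates the argument via `type -t` and clamps exit codes above 128, both omitted here):

```python
import subprocess

def NOT(*cmd):
    """Succeed exactly when the wrapped command fails, mirroring
    the 'es != 0 means the negated test passes' logic in the trace."""
    es = subprocess.run(cmd).returncode  # es=$? in the shell version
    return es != 0
```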
************************************ 00:05:52.271 START TEST skip_rpc_with_json 00:05:52.271 ************************************ 00:05:52.271 13:19:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:52.271 13:19:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:52.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.271 13:19:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69138 00:05:52.271 13:19:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.271 13:19:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69138 00:05:52.271 13:19:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69138 ']' 00:05:52.271 13:19:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.271 13:19:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.271 13:19:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.271 13:19:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.271 13:19:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.271 13:19:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.271 [2024-11-20 13:19:33.899376] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:05:52.271 [2024-11-20 13:19:33.899503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69138 ] 00:05:52.536 [2024-11-20 13:19:34.052146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.536 [2024-11-20 13:19:34.077242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.110 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.110 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:53.110 13:19:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:53.110 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.110 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.110 [2024-11-20 13:19:34.702928] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:53.110 request: 00:05:53.110 { 00:05:53.110 "trtype": "tcp", 00:05:53.110 "method": "nvmf_get_transports", 00:05:53.110 "req_id": 1 00:05:53.110 } 00:05:53.110 Got JSON-RPC error response 00:05:53.110 response: 00:05:53.110 { 00:05:53.110 "code": -19, 00:05:53.110 "message": "No such device" 00:05:53.110 } 00:05:53.110 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:53.110 13:19:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:53.110 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.110 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.110 [2024-11-20 13:19:34.711071] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:53.110 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.110 13:19:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:53.110 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.110 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.371 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.371 13:19:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:53.371 { 00:05:53.371 "subsystems": [ 00:05:53.371 { 00:05:53.371 "subsystem": "fsdev", 00:05:53.371 "config": [ 00:05:53.371 { 00:05:53.371 "method": "fsdev_set_opts", 00:05:53.371 "params": { 00:05:53.371 "fsdev_io_pool_size": 65535, 00:05:53.371 "fsdev_io_cache_size": 256 00:05:53.371 } 00:05:53.371 } 00:05:53.371 ] 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "subsystem": "keyring", 00:05:53.371 "config": [] 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "subsystem": "iobuf", 00:05:53.371 "config": [ 00:05:53.371 { 00:05:53.371 "method": "iobuf_set_options", 00:05:53.371 "params": { 00:05:53.371 "small_pool_count": 8192, 00:05:53.371 "large_pool_count": 1024, 00:05:53.371 "small_bufsize": 8192, 00:05:53.371 "large_bufsize": 135168, 00:05:53.371 "enable_numa": false 00:05:53.371 } 00:05:53.371 } 00:05:53.371 ] 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "subsystem": "sock", 00:05:53.371 "config": [ 00:05:53.371 { 00:05:53.371 "method": "sock_set_default_impl", 00:05:53.371 "params": { 00:05:53.371 "impl_name": "posix" 00:05:53.371 } 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "method": "sock_impl_set_options", 00:05:53.371 "params": { 00:05:53.371 "impl_name": "ssl", 00:05:53.371 "recv_buf_size": 4096, 00:05:53.371 "send_buf_size": 4096, 00:05:53.371 "enable_recv_pipe": true, 00:05:53.371 "enable_quickack": false, 00:05:53.371 
"enable_placement_id": 0, 00:05:53.371 "enable_zerocopy_send_server": true, 00:05:53.371 "enable_zerocopy_send_client": false, 00:05:53.371 "zerocopy_threshold": 0, 00:05:53.371 "tls_version": 0, 00:05:53.371 "enable_ktls": false 00:05:53.371 } 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "method": "sock_impl_set_options", 00:05:53.371 "params": { 00:05:53.371 "impl_name": "posix", 00:05:53.371 "recv_buf_size": 2097152, 00:05:53.371 "send_buf_size": 2097152, 00:05:53.371 "enable_recv_pipe": true, 00:05:53.371 "enable_quickack": false, 00:05:53.371 "enable_placement_id": 0, 00:05:53.371 "enable_zerocopy_send_server": true, 00:05:53.371 "enable_zerocopy_send_client": false, 00:05:53.371 "zerocopy_threshold": 0, 00:05:53.371 "tls_version": 0, 00:05:53.371 "enable_ktls": false 00:05:53.371 } 00:05:53.371 } 00:05:53.371 ] 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "subsystem": "vmd", 00:05:53.371 "config": [] 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "subsystem": "accel", 00:05:53.371 "config": [ 00:05:53.371 { 00:05:53.371 "method": "accel_set_options", 00:05:53.371 "params": { 00:05:53.371 "small_cache_size": 128, 00:05:53.371 "large_cache_size": 16, 00:05:53.371 "task_count": 2048, 00:05:53.371 "sequence_count": 2048, 00:05:53.371 "buf_count": 2048 00:05:53.371 } 00:05:53.371 } 00:05:53.371 ] 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "subsystem": "bdev", 00:05:53.371 "config": [ 00:05:53.371 { 00:05:53.371 "method": "bdev_set_options", 00:05:53.371 "params": { 00:05:53.371 "bdev_io_pool_size": 65535, 00:05:53.371 "bdev_io_cache_size": 256, 00:05:53.371 "bdev_auto_examine": true, 00:05:53.371 "iobuf_small_cache_size": 128, 00:05:53.371 "iobuf_large_cache_size": 16 00:05:53.371 } 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "method": "bdev_raid_set_options", 00:05:53.371 "params": { 00:05:53.371 "process_window_size_kb": 1024, 00:05:53.371 "process_max_bandwidth_mb_sec": 0 00:05:53.371 } 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "method": "bdev_iscsi_set_options", 
00:05:53.371 "params": { 00:05:53.371 "timeout_sec": 30 00:05:53.371 } 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "method": "bdev_nvme_set_options", 00:05:53.371 "params": { 00:05:53.371 "action_on_timeout": "none", 00:05:53.371 "timeout_us": 0, 00:05:53.371 "timeout_admin_us": 0, 00:05:53.371 "keep_alive_timeout_ms": 10000, 00:05:53.371 "arbitration_burst": 0, 00:05:53.371 "low_priority_weight": 0, 00:05:53.371 "medium_priority_weight": 0, 00:05:53.371 "high_priority_weight": 0, 00:05:53.371 "nvme_adminq_poll_period_us": 10000, 00:05:53.371 "nvme_ioq_poll_period_us": 0, 00:05:53.371 "io_queue_requests": 0, 00:05:53.371 "delay_cmd_submit": true, 00:05:53.371 "transport_retry_count": 4, 00:05:53.371 "bdev_retry_count": 3, 00:05:53.371 "transport_ack_timeout": 0, 00:05:53.371 "ctrlr_loss_timeout_sec": 0, 00:05:53.371 "reconnect_delay_sec": 0, 00:05:53.371 "fast_io_fail_timeout_sec": 0, 00:05:53.371 "disable_auto_failback": false, 00:05:53.371 "generate_uuids": false, 00:05:53.371 "transport_tos": 0, 00:05:53.371 "nvme_error_stat": false, 00:05:53.371 "rdma_srq_size": 0, 00:05:53.371 "io_path_stat": false, 00:05:53.371 "allow_accel_sequence": false, 00:05:53.371 "rdma_max_cq_size": 0, 00:05:53.371 "rdma_cm_event_timeout_ms": 0, 00:05:53.371 "dhchap_digests": [ 00:05:53.371 "sha256", 00:05:53.371 "sha384", 00:05:53.371 "sha512" 00:05:53.371 ], 00:05:53.371 "dhchap_dhgroups": [ 00:05:53.371 "null", 00:05:53.371 "ffdhe2048", 00:05:53.371 "ffdhe3072", 00:05:53.371 "ffdhe4096", 00:05:53.371 "ffdhe6144", 00:05:53.371 "ffdhe8192" 00:05:53.371 ] 00:05:53.371 } 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "method": "bdev_nvme_set_hotplug", 00:05:53.371 "params": { 00:05:53.371 "period_us": 100000, 00:05:53.371 "enable": false 00:05:53.371 } 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "method": "bdev_wait_for_examine" 00:05:53.371 } 00:05:53.371 ] 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "subsystem": "scsi", 00:05:53.371 "config": null 00:05:53.371 }, 00:05:53.371 { 
00:05:53.371 "subsystem": "scheduler", 00:05:53.371 "config": [ 00:05:53.371 { 00:05:53.371 "method": "framework_set_scheduler", 00:05:53.371 "params": { 00:05:53.371 "name": "static" 00:05:53.371 } 00:05:53.371 } 00:05:53.371 ] 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "subsystem": "vhost_scsi", 00:05:53.371 "config": [] 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "subsystem": "vhost_blk", 00:05:53.371 "config": [] 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "subsystem": "ublk", 00:05:53.371 "config": [] 00:05:53.371 }, 00:05:53.371 { 00:05:53.371 "subsystem": "nbd", 00:05:53.372 "config": [] 00:05:53.372 }, 00:05:53.372 { 00:05:53.372 "subsystem": "nvmf", 00:05:53.372 "config": [ 00:05:53.372 { 00:05:53.372 "method": "nvmf_set_config", 00:05:53.372 "params": { 00:05:53.372 "discovery_filter": "match_any", 00:05:53.372 "admin_cmd_passthru": { 00:05:53.372 "identify_ctrlr": false 00:05:53.372 }, 00:05:53.372 "dhchap_digests": [ 00:05:53.372 "sha256", 00:05:53.372 "sha384", 00:05:53.372 "sha512" 00:05:53.372 ], 00:05:53.372 "dhchap_dhgroups": [ 00:05:53.372 "null", 00:05:53.372 "ffdhe2048", 00:05:53.372 "ffdhe3072", 00:05:53.372 "ffdhe4096", 00:05:53.372 "ffdhe6144", 00:05:53.372 "ffdhe8192" 00:05:53.372 ] 00:05:53.372 } 00:05:53.372 }, 00:05:53.372 { 00:05:53.372 "method": "nvmf_set_max_subsystems", 00:05:53.372 "params": { 00:05:53.372 "max_subsystems": 1024 00:05:53.372 } 00:05:53.372 }, 00:05:53.372 { 00:05:53.372 "method": "nvmf_set_crdt", 00:05:53.372 "params": { 00:05:53.372 "crdt1": 0, 00:05:53.372 "crdt2": 0, 00:05:53.372 "crdt3": 0 00:05:53.372 } 00:05:53.372 }, 00:05:53.372 { 00:05:53.372 "method": "nvmf_create_transport", 00:05:53.372 "params": { 00:05:53.372 "trtype": "TCP", 00:05:53.372 "max_queue_depth": 128, 00:05:53.372 "max_io_qpairs_per_ctrlr": 127, 00:05:53.372 "in_capsule_data_size": 4096, 00:05:53.372 "max_io_size": 131072, 00:05:53.372 "io_unit_size": 131072, 00:05:53.372 "max_aq_depth": 128, 00:05:53.372 "num_shared_buffers": 511, 
00:05:53.372 "buf_cache_size": 4294967295, 00:05:53.372 "dif_insert_or_strip": false, 00:05:53.372 "zcopy": false, 00:05:53.372 "c2h_success": true, 00:05:53.372 "sock_priority": 0, 00:05:53.372 "abort_timeout_sec": 1, 00:05:53.372 "ack_timeout": 0, 00:05:53.372 "data_wr_pool_size": 0 00:05:53.372 } 00:05:53.372 } 00:05:53.372 ] 00:05:53.372 }, 00:05:53.372 { 00:05:53.372 "subsystem": "iscsi", 00:05:53.372 "config": [ 00:05:53.372 { 00:05:53.372 "method": "iscsi_set_options", 00:05:53.372 "params": { 00:05:53.372 "node_base": "iqn.2016-06.io.spdk", 00:05:53.372 "max_sessions": 128, 00:05:53.372 "max_connections_per_session": 2, 00:05:53.372 "max_queue_depth": 64, 00:05:53.372 "default_time2wait": 2, 00:05:53.372 "default_time2retain": 20, 00:05:53.372 "first_burst_length": 8192, 00:05:53.372 "immediate_data": true, 00:05:53.372 "allow_duplicated_isid": false, 00:05:53.372 "error_recovery_level": 0, 00:05:53.372 "nop_timeout": 60, 00:05:53.372 "nop_in_interval": 30, 00:05:53.372 "disable_chap": false, 00:05:53.372 "require_chap": false, 00:05:53.372 "mutual_chap": false, 00:05:53.372 "chap_group": 0, 00:05:53.372 "max_large_datain_per_connection": 64, 00:05:53.372 "max_r2t_per_connection": 4, 00:05:53.372 "pdu_pool_size": 36864, 00:05:53.372 "immediate_data_pool_size": 16384, 00:05:53.372 "data_out_pool_size": 2048 00:05:53.372 } 00:05:53.372 } 00:05:53.372 ] 00:05:53.372 } 00:05:53.372 ] 00:05:53.372 } 00:05:53.372 13:19:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:53.372 13:19:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69138 00:05:53.372 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69138 ']' 00:05:53.372 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69138 00:05:53.372 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:53.372 13:19:34 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.372 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69138 00:05:53.372 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.372 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.372 killing process with pid 69138 00:05:53.372 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69138' 00:05:53.372 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69138 00:05:53.372 13:19:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69138 00:05:53.632 13:19:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69161 00:05:53.632 13:19:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:53.632 13:19:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:58.975 13:19:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69161 00:05:58.975 13:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69161 ']' 00:05:58.975 13:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69161 00:05:58.975 13:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:58.975 13:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.975 13:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69161 00:05:58.975 killing process with pid 69161 00:05:58.975 13:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.975 13:19:40 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.975 13:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69161' 00:05:58.975 13:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69161 00:05:58.975 13:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69161 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:59.234 ************************************ 00:05:59.234 END TEST skip_rpc_with_json 00:05:59.234 ************************************ 00:05:59.234 00:05:59.234 real 0m6.862s 00:05:59.234 user 0m6.373s 00:05:59.234 sys 0m0.720s 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.234 13:19:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:59.234 13:19:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.234 13:19:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.234 13:19:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.234 ************************************ 00:05:59.234 START TEST skip_rpc_with_delay 00:05:59.234 ************************************ 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:59.234 
13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.234 [2024-11-20 13:19:40.827092] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
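Before running spdk_tgt under NOT, the trace above resolves the argument with `type -t` / `type -P` and checks it is executable (valid_exec_arg). A loose Python equivalent of that resolution step (a sketch only; shell builtins and functions, which the real helper also accepts, are out of scope here):

```python
import os
import shutil

def valid_exec_arg(arg):
    """Resolve 'arg' the way 'type -P' does -- PATH lookup for bare
    names, as-is for explicit paths -- and require an executable file."""
    path = arg if os.sep in arg else shutil.which(arg)
    return path is not None and os.access(path, os.X_OK)
```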
00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:59.234 00:05:59.234 real 0m0.153s 00:05:59.234 user 0m0.080s 00:05:59.234 sys 0m0.071s 00:05:59.234 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.235 13:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:59.235 ************************************ 00:05:59.235 END TEST skip_rpc_with_delay 00:05:59.235 ************************************ 00:05:59.494 13:19:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:59.494 13:19:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:59.494 13:19:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:59.494 13:19:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.494 13:19:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.494 13:19:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.494 ************************************ 00:05:59.494 START TEST exit_on_failed_rpc_init 00:05:59.494 ************************************ 00:05:59.494 13:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:59.494 13:19:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69278 00:05:59.494 13:19:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.494 13:19:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69278 00:05:59.494 13:19:40 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69278 ']' 00:05:59.495 13:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.495 13:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.495 13:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.495 13:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.495 13:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:59.495 [2024-11-20 13:19:41.048702] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:05:59.495 [2024-11-20 13:19:41.048936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69278 ] 00:05:59.755 [2024-11-20 13:19:41.206717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.755 [2024-11-20 13:19:41.231320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.324 13:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.324 13:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:00.324 13:19:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.324 13:19:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.324 13:19:41 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:00.324 13:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.324 13:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.324 13:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.324 13:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.324 13:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.324 13:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.324 13:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.324 13:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.324 13:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:00.324 13:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.324 [2024-11-20 13:19:41.953057] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:00.324 [2024-11-20 13:19:41.953279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69296 ] 00:06:00.584 [2024-11-20 13:19:42.101832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.584 [2024-11-20 13:19:42.127320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.584 [2024-11-20 13:19:42.127511] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:00.584 [2024-11-20 13:19:42.127566] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:00.584 [2024-11-20 13:19:42.127587] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:00.584 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:00.584 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.584 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:00.584 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:00.584 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:00.584 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.584 13:19:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:00.584 13:19:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69278 00:06:00.584 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69278 ']' 00:06:00.584 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69278 00:06:00.584 13:19:42 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:00.584 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.584 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69278 00:06:00.844 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.844 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.844 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69278' 00:06:00.844 killing process with pid 69278 00:06:00.844 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69278 00:06:00.844 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69278 00:06:01.104 00:06:01.104 real 0m1.655s 00:06:01.104 user 0m1.743s 00:06:01.104 sys 0m0.478s 00:06:01.104 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.104 13:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.104 ************************************ 00:06:01.104 END TEST exit_on_failed_rpc_init 00:06:01.104 ************************************ 00:06:01.104 13:19:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:01.104 00:06:01.104 real 0m14.566s 00:06:01.104 user 0m13.430s 00:06:01.104 sys 0m1.861s 00:06:01.104 13:19:42 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.104 13:19:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.104 ************************************ 00:06:01.104 END TEST skip_rpc 00:06:01.104 ************************************ 00:06:01.104 13:19:42 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:01.104 13:19:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.104 13:19:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.104 13:19:42 -- common/autotest_common.sh@10 -- # set +x 00:06:01.104 ************************************ 00:06:01.104 START TEST rpc_client 00:06:01.104 ************************************ 00:06:01.104 13:19:42 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:01.364 * Looking for test storage... 00:06:01.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:01.364 13:19:42 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:01.364 13:19:42 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:01.364 13:19:42 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.364 13:19:42 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@345 
-- # : 1 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.364 13:19:42 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:01.364 13:19:42 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.364 13:19:42 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:01.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.364 --rc genhtml_branch_coverage=1 00:06:01.364 --rc genhtml_function_coverage=1 00:06:01.364 --rc genhtml_legend=1 00:06:01.364 --rc geninfo_all_blocks=1 00:06:01.364 --rc geninfo_unexecuted_blocks=1 00:06:01.364 00:06:01.364 ' 00:06:01.364 13:19:42 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:01.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.364 --rc genhtml_branch_coverage=1 00:06:01.364 --rc genhtml_function_coverage=1 00:06:01.364 --rc 
genhtml_legend=1 00:06:01.364 --rc geninfo_all_blocks=1 00:06:01.364 --rc geninfo_unexecuted_blocks=1 00:06:01.364 00:06:01.364 ' 00:06:01.364 13:19:42 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:01.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.364 --rc genhtml_branch_coverage=1 00:06:01.364 --rc genhtml_function_coverage=1 00:06:01.364 --rc genhtml_legend=1 00:06:01.364 --rc geninfo_all_blocks=1 00:06:01.364 --rc geninfo_unexecuted_blocks=1 00:06:01.364 00:06:01.364 ' 00:06:01.364 13:19:42 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:01.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.364 --rc genhtml_branch_coverage=1 00:06:01.364 --rc genhtml_function_coverage=1 00:06:01.364 --rc genhtml_legend=1 00:06:01.364 --rc geninfo_all_blocks=1 00:06:01.364 --rc geninfo_unexecuted_blocks=1 00:06:01.364 00:06:01.364 ' 00:06:01.364 13:19:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:01.364 OK 00:06:01.364 13:19:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:01.364 ************************************ 00:06:01.364 END TEST rpc_client 00:06:01.364 ************************************ 00:06:01.364 00:06:01.364 real 0m0.278s 00:06:01.364 user 0m0.149s 00:06:01.364 sys 0m0.143s 00:06:01.364 13:19:43 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.364 13:19:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:01.625 13:19:43 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:01.625 13:19:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.625 13:19:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.625 13:19:43 -- common/autotest_common.sh@10 -- # set +x 00:06:01.625 ************************************ 00:06:01.625 START TEST json_config 
00:06:01.625 ************************************ 00:06:01.625 13:19:43 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:01.625 13:19:43 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:01.625 13:19:43 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:01.625 13:19:43 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.625 13:19:43 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.625 13:19:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.625 13:19:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.625 13:19:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.625 13:19:43 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.625 13:19:43 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.625 13:19:43 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.625 13:19:43 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.625 13:19:43 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.625 13:19:43 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.625 13:19:43 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.625 13:19:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.625 13:19:43 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:01.625 13:19:43 json_config -- scripts/common.sh@345 -- # : 1 00:06:01.625 13:19:43 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.625 13:19:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.625 13:19:43 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:01.625 13:19:43 json_config -- scripts/common.sh@353 -- # local d=1 00:06:01.625 13:19:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.625 13:19:43 json_config -- scripts/common.sh@355 -- # echo 1 00:06:01.625 13:19:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.625 13:19:43 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:01.625 13:19:43 json_config -- scripts/common.sh@353 -- # local d=2 00:06:01.625 13:19:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.625 13:19:43 json_config -- scripts/common.sh@355 -- # echo 2 00:06:01.625 13:19:43 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.625 13:19:43 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.625 13:19:43 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.625 13:19:43 json_config -- scripts/common.sh@368 -- # return 0 00:06:01.625 13:19:43 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.625 13:19:43 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:01.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.625 --rc genhtml_branch_coverage=1 00:06:01.625 --rc genhtml_function_coverage=1 00:06:01.625 --rc genhtml_legend=1 00:06:01.625 --rc geninfo_all_blocks=1 00:06:01.625 --rc geninfo_unexecuted_blocks=1 00:06:01.625 00:06:01.625 ' 00:06:01.625 13:19:43 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:01.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.625 --rc genhtml_branch_coverage=1 00:06:01.625 --rc genhtml_function_coverage=1 00:06:01.625 --rc genhtml_legend=1 00:06:01.625 --rc geninfo_all_blocks=1 00:06:01.625 --rc geninfo_unexecuted_blocks=1 00:06:01.625 00:06:01.625 ' 00:06:01.625 13:19:43 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:01.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.625 --rc genhtml_branch_coverage=1 00:06:01.625 --rc genhtml_function_coverage=1 00:06:01.625 --rc genhtml_legend=1 00:06:01.625 --rc geninfo_all_blocks=1 00:06:01.625 --rc geninfo_unexecuted_blocks=1 00:06:01.625 00:06:01.625 ' 00:06:01.625 13:19:43 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:01.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.625 --rc genhtml_branch_coverage=1 00:06:01.625 --rc genhtml_function_coverage=1 00:06:01.625 --rc genhtml_legend=1 00:06:01.625 --rc geninfo_all_blocks=1 00:06:01.625 --rc geninfo_unexecuted_blocks=1 00:06:01.625 00:06:01.625 ' 00:06:01.625 13:19:43 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:01.625 13:19:43 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:01.626 13:19:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.626 13:19:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.626 13:19:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.626 13:19:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.626 13:19:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.626 13:19:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.626 13:19:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.626 13:19:43 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.626 13:19:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.626 13:19:43 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ffda71aa-3258-4bae-910a-531305c80dfb 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=ffda71aa-3258-4bae-910a-531305c80dfb 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:01.886 13:19:43 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:01.886 13:19:43 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.886 13:19:43 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.886 13:19:43 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.886 13:19:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.886 13:19:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.886 13:19:43 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.886 13:19:43 json_config -- paths/export.sh@5 -- # export PATH 00:06:01.886 13:19:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@51 -- # : 0 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:01.886 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:01.886 13:19:43 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:01.886 13:19:43 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:01.886 13:19:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:01.886 13:19:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:01.886 13:19:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:01.886 13:19:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:01.886 13:19:43 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:01.886 WARNING: No tests are enabled so not running JSON configuration tests 00:06:01.886 13:19:43 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:01.886 00:06:01.886 real 0m0.234s 00:06:01.886 user 0m0.131s 00:06:01.886 sys 0m0.107s 00:06:01.886 13:19:43 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.886 13:19:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.886 ************************************ 00:06:01.886 END TEST json_config 00:06:01.886 ************************************ 00:06:01.886 13:19:43 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:01.886 13:19:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.886 13:19:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.886 13:19:43 -- common/autotest_common.sh@10 -- # set +x 00:06:01.886 ************************************ 00:06:01.886 START TEST json_config_extra_key 00:06:01.886 ************************************ 00:06:01.886 13:19:43 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:01.886 13:19:43 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:01.886 13:19:43 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:06:01.886 13:19:43 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.886 13:19:43 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:01.886 13:19:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.887 13:19:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:01.887 13:19:43 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.887 13:19:43 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:02.147 13:19:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:02.147 13:19:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.147 13:19:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:02.147 13:19:43 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.147 13:19:43 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.147 13:19:43 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.147 13:19:43 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:02.147 13:19:43 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.147 13:19:43 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.147 --rc genhtml_branch_coverage=1 00:06:02.147 --rc genhtml_function_coverage=1 00:06:02.147 --rc genhtml_legend=1 00:06:02.147 --rc geninfo_all_blocks=1 00:06:02.147 --rc geninfo_unexecuted_blocks=1 00:06:02.147 00:06:02.147 ' 00:06:02.147 13:19:43 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.147 --rc genhtml_branch_coverage=1 00:06:02.147 --rc genhtml_function_coverage=1 00:06:02.147 --rc 
genhtml_legend=1 00:06:02.147 --rc geninfo_all_blocks=1 00:06:02.147 --rc geninfo_unexecuted_blocks=1 00:06:02.147 00:06:02.147 ' 00:06:02.147 13:19:43 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.147 --rc genhtml_branch_coverage=1 00:06:02.147 --rc genhtml_function_coverage=1 00:06:02.147 --rc genhtml_legend=1 00:06:02.147 --rc geninfo_all_blocks=1 00:06:02.147 --rc geninfo_unexecuted_blocks=1 00:06:02.147 00:06:02.147 ' 00:06:02.147 13:19:43 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.147 --rc genhtml_branch_coverage=1 00:06:02.147 --rc genhtml_function_coverage=1 00:06:02.147 --rc genhtml_legend=1 00:06:02.147 --rc geninfo_all_blocks=1 00:06:02.147 --rc geninfo_unexecuted_blocks=1 00:06:02.147 00:06:02.147 ' 00:06:02.147 13:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ffda71aa-3258-4bae-910a-531305c80dfb 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ffda71aa-3258-4bae-910a-531305c80dfb 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.147 13:19:43 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:02.147 13:19:43 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:02.147 13:19:43 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.147 13:19:43 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.147 13:19:43 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.147 13:19:43 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.147 13:19:43 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.148 13:19:43 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.148 13:19:43 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:02.148 13:19:43 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.148 13:19:43 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:02.148 13:19:43 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:02.148 13:19:43 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:02.148 13:19:43 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.148 13:19:43 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.148 13:19:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:02.148 13:19:43 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:02.148 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:02.148 13:19:43 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:02.148 13:19:43 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:02.148 13:19:43 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:02.148 13:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:02.148 13:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:02.148 13:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:02.148 13:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:02.148 13:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:02.148 13:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:02.148 13:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:02.148 13:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:02.148 13:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:02.148 13:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:02.148 13:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:02.148 INFO: launching applications... 
00:06:02.148 13:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:02.148 13:19:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:02.148 13:19:43 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:02.148 13:19:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:02.148 13:19:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:02.148 13:19:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:02.148 13:19:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.148 13:19:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.148 13:19:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69473 00:06:02.148 13:19:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:02.148 Waiting for target to run... 00:06:02.148 13:19:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69473 /var/tmp/spdk_tgt.sock 00:06:02.148 13:19:43 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:02.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:02.148 13:19:43 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 69473 ']' 00:06:02.148 13:19:43 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:02.148 13:19:43 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.148 13:19:43 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:02.148 13:19:43 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.148 13:19:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:02.148 [2024-11-20 13:19:43.702604] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:02.148 [2024-11-20 13:19:43.702820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69473 ] 00:06:02.717 [2024-11-20 13:19:44.079696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.717 [2024-11-20 13:19:44.095593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.977 13:19:44 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.977 13:19:44 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:02.977 13:19:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:02.977 00:06:02.977 13:19:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:02.977 INFO: shutting down applications... 
00:06:02.977 13:19:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:02.977 13:19:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:02.977 13:19:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:02.977 13:19:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69473 ]] 00:06:02.977 13:19:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69473 00:06:02.977 13:19:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:02.977 13:19:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.977 13:19:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69473 00:06:02.977 13:19:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:03.547 13:19:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:03.547 13:19:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.547 13:19:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69473 00:06:03.547 13:19:45 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:03.547 13:19:45 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:03.547 13:19:45 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:03.547 13:19:45 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:03.547 SPDK target shutdown done 00:06:03.547 13:19:45 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:03.547 Success 00:06:03.547 00:06:03.547 real 0m1.647s 00:06:03.547 user 0m1.342s 00:06:03.547 sys 0m0.466s 00:06:03.547 13:19:45 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.547 13:19:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:03.547 ************************************ 
00:06:03.547 END TEST json_config_extra_key 00:06:03.547 ************************************ 00:06:03.547 13:19:45 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:03.547 13:19:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.547 13:19:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.547 13:19:45 -- common/autotest_common.sh@10 -- # set +x 00:06:03.547 ************************************ 00:06:03.547 START TEST alias_rpc 00:06:03.547 ************************************ 00:06:03.547 13:19:45 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:03.547 * Looking for test storage... 00:06:03.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:03.807 13:19:45 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.807 13:19:45 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.807 13:19:45 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.807 13:19:45 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.807 13:19:45 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.807 13:19:45 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:03.807 13:19:45 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.807 13:19:45 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.807 --rc genhtml_branch_coverage=1 00:06:03.807 --rc genhtml_function_coverage=1 00:06:03.807 --rc genhtml_legend=1 00:06:03.807 --rc geninfo_all_blocks=1 00:06:03.807 --rc geninfo_unexecuted_blocks=1 00:06:03.807 00:06:03.807 ' 00:06:03.807 13:19:45 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.807 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.807 --rc genhtml_branch_coverage=1 00:06:03.807 --rc genhtml_function_coverage=1 00:06:03.807 --rc genhtml_legend=1 00:06:03.807 --rc geninfo_all_blocks=1 00:06:03.807 --rc geninfo_unexecuted_blocks=1 00:06:03.807 00:06:03.807 ' 00:06:03.807 13:19:45 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.807 --rc genhtml_branch_coverage=1 00:06:03.807 --rc genhtml_function_coverage=1 00:06:03.807 --rc genhtml_legend=1 00:06:03.807 --rc geninfo_all_blocks=1 00:06:03.807 --rc geninfo_unexecuted_blocks=1 00:06:03.807 00:06:03.807 ' 00:06:03.807 13:19:45 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.807 --rc genhtml_branch_coverage=1 00:06:03.807 --rc genhtml_function_coverage=1 00:06:03.807 --rc genhtml_legend=1 00:06:03.807 --rc geninfo_all_blocks=1 00:06:03.807 --rc geninfo_unexecuted_blocks=1 00:06:03.807 00:06:03.807 ' 00:06:03.807 13:19:45 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:03.807 13:19:45 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69552 00:06:03.807 13:19:45 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.807 13:19:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69552 00:06:03.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:03.807 13:19:45 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 69552 ']' 00:06:03.807 13:19:45 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.807 13:19:45 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.807 13:19:45 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.807 13:19:45 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.807 13:19:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.807 [2024-11-20 13:19:45.415642] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:03.807 [2024-11-20 13:19:45.415869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69552 ] 00:06:04.067 [2024-11-20 13:19:45.567427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.067 [2024-11-20 13:19:45.595327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.637 13:19:46 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.637 13:19:46 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:04.637 13:19:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:04.897 13:19:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69552 00:06:04.897 13:19:46 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 69552 ']' 00:06:04.897 13:19:46 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 69552 00:06:04.897 13:19:46 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:04.897 13:19:46 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.897 13:19:46 alias_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69552 00:06:04.897 killing process with pid 69552 00:06:04.897 13:19:46 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.897 13:19:46 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.897 13:19:46 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69552' 00:06:04.897 13:19:46 alias_rpc -- common/autotest_common.sh@973 -- # kill 69552 00:06:04.897 13:19:46 alias_rpc -- common/autotest_common.sh@978 -- # wait 69552 00:06:05.466 ************************************ 00:06:05.466 END TEST alias_rpc 00:06:05.466 ************************************ 00:06:05.466 00:06:05.466 real 0m1.776s 00:06:05.466 user 0m1.812s 00:06:05.466 sys 0m0.507s 00:06:05.466 13:19:46 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.466 13:19:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.466 13:19:46 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:05.466 13:19:46 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:05.466 13:19:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.466 13:19:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.466 13:19:46 -- common/autotest_common.sh@10 -- # set +x 00:06:05.466 ************************************ 00:06:05.466 START TEST spdkcli_tcp 00:06:05.466 ************************************ 00:06:05.466 13:19:46 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:05.466 * Looking for test storage... 
00:06:05.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:05.466 13:19:47 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:05.466 13:19:47 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:05.466 13:19:47 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.466 13:19:47 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.466 13:19:47 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:05.466 13:19:47 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.466 13:19:47 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.466 --rc genhtml_branch_coverage=1 00:06:05.466 --rc genhtml_function_coverage=1 00:06:05.466 --rc genhtml_legend=1 00:06:05.466 --rc geninfo_all_blocks=1 00:06:05.466 --rc geninfo_unexecuted_blocks=1 00:06:05.466 00:06:05.466 ' 00:06:05.466 13:19:47 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.466 --rc genhtml_branch_coverage=1 00:06:05.466 --rc genhtml_function_coverage=1 00:06:05.466 --rc genhtml_legend=1 00:06:05.466 --rc geninfo_all_blocks=1 00:06:05.466 --rc geninfo_unexecuted_blocks=1 00:06:05.466 00:06:05.466 ' 00:06:05.466 13:19:47 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.466 --rc genhtml_branch_coverage=1 00:06:05.466 --rc genhtml_function_coverage=1 00:06:05.466 --rc genhtml_legend=1 00:06:05.466 --rc geninfo_all_blocks=1 00:06:05.466 --rc geninfo_unexecuted_blocks=1 00:06:05.466 00:06:05.466 ' 00:06:05.466 13:19:47 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.466 --rc genhtml_branch_coverage=1 00:06:05.466 --rc genhtml_function_coverage=1 00:06:05.466 --rc genhtml_legend=1 00:06:05.466 --rc geninfo_all_blocks=1 00:06:05.466 --rc geninfo_unexecuted_blocks=1 00:06:05.466 00:06:05.466 ' 00:06:05.466 13:19:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:05.467 13:19:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:05.467 13:19:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:05.467 13:19:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:05.467 13:19:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:05.726 13:19:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:05.726 13:19:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:05.726 13:19:47 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.726 13:19:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.726 13:19:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69637 00:06:05.726 13:19:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:05.726 13:19:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69637 00:06:05.726 13:19:47 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 69637 ']' 00:06:05.726 13:19:47 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.726 13:19:47 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.726 13:19:47 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.726 13:19:47 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.726 13:19:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.726 [2024-11-20 13:19:47.227885] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:05.726 [2024-11-20 13:19:47.228152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69637 ] 00:06:05.726 [2024-11-20 13:19:47.383668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.986 [2024-11-20 13:19:47.409893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.986 [2024-11-20 13:19:47.409984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.554 13:19:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.554 13:19:48 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:06.554 13:19:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:06.554 13:19:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69654 00:06:06.554 13:19:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:06.814 [ 00:06:06.814 "bdev_malloc_delete", 
00:06:06.814 "bdev_malloc_create", 00:06:06.814 "bdev_null_resize", 00:06:06.814 "bdev_null_delete", 00:06:06.814 "bdev_null_create", 00:06:06.814 "bdev_nvme_cuse_unregister", 00:06:06.814 "bdev_nvme_cuse_register", 00:06:06.814 "bdev_opal_new_user", 00:06:06.814 "bdev_opal_set_lock_state", 00:06:06.814 "bdev_opal_delete", 00:06:06.814 "bdev_opal_get_info", 00:06:06.814 "bdev_opal_create", 00:06:06.814 "bdev_nvme_opal_revert", 00:06:06.814 "bdev_nvme_opal_init", 00:06:06.815 "bdev_nvme_send_cmd", 00:06:06.815 "bdev_nvme_set_keys", 00:06:06.815 "bdev_nvme_get_path_iostat", 00:06:06.815 "bdev_nvme_get_mdns_discovery_info", 00:06:06.815 "bdev_nvme_stop_mdns_discovery", 00:06:06.815 "bdev_nvme_start_mdns_discovery", 00:06:06.815 "bdev_nvme_set_multipath_policy", 00:06:06.815 "bdev_nvme_set_preferred_path", 00:06:06.815 "bdev_nvme_get_io_paths", 00:06:06.815 "bdev_nvme_remove_error_injection", 00:06:06.815 "bdev_nvme_add_error_injection", 00:06:06.815 "bdev_nvme_get_discovery_info", 00:06:06.815 "bdev_nvme_stop_discovery", 00:06:06.815 "bdev_nvme_start_discovery", 00:06:06.815 "bdev_nvme_get_controller_health_info", 00:06:06.815 "bdev_nvme_disable_controller", 00:06:06.815 "bdev_nvme_enable_controller", 00:06:06.815 "bdev_nvme_reset_controller", 00:06:06.815 "bdev_nvme_get_transport_statistics", 00:06:06.815 "bdev_nvme_apply_firmware", 00:06:06.815 "bdev_nvme_detach_controller", 00:06:06.815 "bdev_nvme_get_controllers", 00:06:06.815 "bdev_nvme_attach_controller", 00:06:06.815 "bdev_nvme_set_hotplug", 00:06:06.815 "bdev_nvme_set_options", 00:06:06.815 "bdev_passthru_delete", 00:06:06.815 "bdev_passthru_create", 00:06:06.815 "bdev_lvol_set_parent_bdev", 00:06:06.815 "bdev_lvol_set_parent", 00:06:06.815 "bdev_lvol_check_shallow_copy", 00:06:06.815 "bdev_lvol_start_shallow_copy", 00:06:06.815 "bdev_lvol_grow_lvstore", 00:06:06.815 "bdev_lvol_get_lvols", 00:06:06.815 "bdev_lvol_get_lvstores", 00:06:06.815 "bdev_lvol_delete", 00:06:06.815 "bdev_lvol_set_read_only", 
00:06:06.815 "bdev_lvol_resize", 00:06:06.815 "bdev_lvol_decouple_parent", 00:06:06.815 "bdev_lvol_inflate", 00:06:06.815 "bdev_lvol_rename", 00:06:06.815 "bdev_lvol_clone_bdev", 00:06:06.815 "bdev_lvol_clone", 00:06:06.815 "bdev_lvol_snapshot", 00:06:06.815 "bdev_lvol_create", 00:06:06.815 "bdev_lvol_delete_lvstore", 00:06:06.815 "bdev_lvol_rename_lvstore", 00:06:06.815 "bdev_lvol_create_lvstore", 00:06:06.815 "bdev_raid_set_options", 00:06:06.815 "bdev_raid_remove_base_bdev", 00:06:06.815 "bdev_raid_add_base_bdev", 00:06:06.815 "bdev_raid_delete", 00:06:06.815 "bdev_raid_create", 00:06:06.815 "bdev_raid_get_bdevs", 00:06:06.815 "bdev_error_inject_error", 00:06:06.815 "bdev_error_delete", 00:06:06.815 "bdev_error_create", 00:06:06.815 "bdev_split_delete", 00:06:06.815 "bdev_split_create", 00:06:06.815 "bdev_delay_delete", 00:06:06.815 "bdev_delay_create", 00:06:06.815 "bdev_delay_update_latency", 00:06:06.815 "bdev_zone_block_delete", 00:06:06.815 "bdev_zone_block_create", 00:06:06.815 "blobfs_create", 00:06:06.815 "blobfs_detect", 00:06:06.815 "blobfs_set_cache_size", 00:06:06.815 "bdev_aio_delete", 00:06:06.815 "bdev_aio_rescan", 00:06:06.815 "bdev_aio_create", 00:06:06.815 "bdev_ftl_set_property", 00:06:06.815 "bdev_ftl_get_properties", 00:06:06.815 "bdev_ftl_get_stats", 00:06:06.815 "bdev_ftl_unmap", 00:06:06.815 "bdev_ftl_unload", 00:06:06.815 "bdev_ftl_delete", 00:06:06.815 "bdev_ftl_load", 00:06:06.815 "bdev_ftl_create", 00:06:06.815 "bdev_virtio_attach_controller", 00:06:06.815 "bdev_virtio_scsi_get_devices", 00:06:06.815 "bdev_virtio_detach_controller", 00:06:06.815 "bdev_virtio_blk_set_hotplug", 00:06:06.815 "bdev_iscsi_delete", 00:06:06.815 "bdev_iscsi_create", 00:06:06.815 "bdev_iscsi_set_options", 00:06:06.815 "accel_error_inject_error", 00:06:06.815 "ioat_scan_accel_module", 00:06:06.815 "dsa_scan_accel_module", 00:06:06.815 "iaa_scan_accel_module", 00:06:06.815 "keyring_file_remove_key", 00:06:06.815 "keyring_file_add_key", 00:06:06.815 
"keyring_linux_set_options", 00:06:06.815 "fsdev_aio_delete", 00:06:06.815 "fsdev_aio_create", 00:06:06.815 "iscsi_get_histogram", 00:06:06.815 "iscsi_enable_histogram", 00:06:06.815 "iscsi_set_options", 00:06:06.815 "iscsi_get_auth_groups", 00:06:06.815 "iscsi_auth_group_remove_secret", 00:06:06.815 "iscsi_auth_group_add_secret", 00:06:06.815 "iscsi_delete_auth_group", 00:06:06.815 "iscsi_create_auth_group", 00:06:06.815 "iscsi_set_discovery_auth", 00:06:06.815 "iscsi_get_options", 00:06:06.815 "iscsi_target_node_request_logout", 00:06:06.815 "iscsi_target_node_set_redirect", 00:06:06.815 "iscsi_target_node_set_auth", 00:06:06.815 "iscsi_target_node_add_lun", 00:06:06.815 "iscsi_get_stats", 00:06:06.815 "iscsi_get_connections", 00:06:06.815 "iscsi_portal_group_set_auth", 00:06:06.815 "iscsi_start_portal_group", 00:06:06.815 "iscsi_delete_portal_group", 00:06:06.815 "iscsi_create_portal_group", 00:06:06.815 "iscsi_get_portal_groups", 00:06:06.815 "iscsi_delete_target_node", 00:06:06.815 "iscsi_target_node_remove_pg_ig_maps", 00:06:06.815 "iscsi_target_node_add_pg_ig_maps", 00:06:06.815 "iscsi_create_target_node", 00:06:06.815 "iscsi_get_target_nodes", 00:06:06.815 "iscsi_delete_initiator_group", 00:06:06.815 "iscsi_initiator_group_remove_initiators", 00:06:06.815 "iscsi_initiator_group_add_initiators", 00:06:06.815 "iscsi_create_initiator_group", 00:06:06.815 "iscsi_get_initiator_groups", 00:06:06.815 "nvmf_set_crdt", 00:06:06.815 "nvmf_set_config", 00:06:06.815 "nvmf_set_max_subsystems", 00:06:06.815 "nvmf_stop_mdns_prr", 00:06:06.815 "nvmf_publish_mdns_prr", 00:06:06.815 "nvmf_subsystem_get_listeners", 00:06:06.815 "nvmf_subsystem_get_qpairs", 00:06:06.815 "nvmf_subsystem_get_controllers", 00:06:06.815 "nvmf_get_stats", 00:06:06.815 "nvmf_get_transports", 00:06:06.815 "nvmf_create_transport", 00:06:06.815 "nvmf_get_targets", 00:06:06.815 "nvmf_delete_target", 00:06:06.815 "nvmf_create_target", 00:06:06.815 "nvmf_subsystem_allow_any_host", 00:06:06.815 
"nvmf_subsystem_set_keys", 00:06:06.815 "nvmf_subsystem_remove_host", 00:06:06.815 "nvmf_subsystem_add_host", 00:06:06.815 "nvmf_ns_remove_host", 00:06:06.815 "nvmf_ns_add_host", 00:06:06.815 "nvmf_subsystem_remove_ns", 00:06:06.815 "nvmf_subsystem_set_ns_ana_group", 00:06:06.815 "nvmf_subsystem_add_ns", 00:06:06.815 "nvmf_subsystem_listener_set_ana_state", 00:06:06.815 "nvmf_discovery_get_referrals", 00:06:06.815 "nvmf_discovery_remove_referral", 00:06:06.815 "nvmf_discovery_add_referral", 00:06:06.815 "nvmf_subsystem_remove_listener", 00:06:06.815 "nvmf_subsystem_add_listener", 00:06:06.815 "nvmf_delete_subsystem", 00:06:06.815 "nvmf_create_subsystem", 00:06:06.815 "nvmf_get_subsystems", 00:06:06.815 "env_dpdk_get_mem_stats", 00:06:06.815 "nbd_get_disks", 00:06:06.815 "nbd_stop_disk", 00:06:06.815 "nbd_start_disk", 00:06:06.815 "ublk_recover_disk", 00:06:06.815 "ublk_get_disks", 00:06:06.815 "ublk_stop_disk", 00:06:06.815 "ublk_start_disk", 00:06:06.815 "ublk_destroy_target", 00:06:06.815 "ublk_create_target", 00:06:06.815 "virtio_blk_create_transport", 00:06:06.815 "virtio_blk_get_transports", 00:06:06.815 "vhost_controller_set_coalescing", 00:06:06.815 "vhost_get_controllers", 00:06:06.815 "vhost_delete_controller", 00:06:06.815 "vhost_create_blk_controller", 00:06:06.815 "vhost_scsi_controller_remove_target", 00:06:06.815 "vhost_scsi_controller_add_target", 00:06:06.815 "vhost_start_scsi_controller", 00:06:06.815 "vhost_create_scsi_controller", 00:06:06.815 "thread_set_cpumask", 00:06:06.815 "scheduler_set_options", 00:06:06.815 "framework_get_governor", 00:06:06.815 "framework_get_scheduler", 00:06:06.815 "framework_set_scheduler", 00:06:06.815 "framework_get_reactors", 00:06:06.815 "thread_get_io_channels", 00:06:06.815 "thread_get_pollers", 00:06:06.815 "thread_get_stats", 00:06:06.815 "framework_monitor_context_switch", 00:06:06.815 "spdk_kill_instance", 00:06:06.815 "log_enable_timestamps", 00:06:06.815 "log_get_flags", 00:06:06.815 "log_clear_flag", 
00:06:06.815 "log_set_flag", 00:06:06.815 "log_get_level", 00:06:06.815 "log_set_level", 00:06:06.815 "log_get_print_level", 00:06:06.815 "log_set_print_level", 00:06:06.815 "framework_enable_cpumask_locks", 00:06:06.815 "framework_disable_cpumask_locks", 00:06:06.815 "framework_wait_init", 00:06:06.815 "framework_start_init", 00:06:06.815 "scsi_get_devices", 00:06:06.815 "bdev_get_histogram", 00:06:06.815 "bdev_enable_histogram", 00:06:06.815 "bdev_set_qos_limit", 00:06:06.815 "bdev_set_qd_sampling_period", 00:06:06.815 "bdev_get_bdevs", 00:06:06.815 "bdev_reset_iostat", 00:06:06.815 "bdev_get_iostat", 00:06:06.815 "bdev_examine", 00:06:06.815 "bdev_wait_for_examine", 00:06:06.815 "bdev_set_options", 00:06:06.815 "accel_get_stats", 00:06:06.815 "accel_set_options", 00:06:06.815 "accel_set_driver", 00:06:06.815 "accel_crypto_key_destroy", 00:06:06.815 "accel_crypto_keys_get", 00:06:06.815 "accel_crypto_key_create", 00:06:06.815 "accel_assign_opc", 00:06:06.815 "accel_get_module_info", 00:06:06.815 "accel_get_opc_assignments", 00:06:06.815 "vmd_rescan", 00:06:06.815 "vmd_remove_device", 00:06:06.815 "vmd_enable", 00:06:06.815 "sock_get_default_impl", 00:06:06.815 "sock_set_default_impl", 00:06:06.815 "sock_impl_set_options", 00:06:06.815 "sock_impl_get_options", 00:06:06.815 "iobuf_get_stats", 00:06:06.815 "iobuf_set_options", 00:06:06.815 "keyring_get_keys", 00:06:06.815 "framework_get_pci_devices", 00:06:06.816 "framework_get_config", 00:06:06.816 "framework_get_subsystems", 00:06:06.816 "fsdev_set_opts", 00:06:06.816 "fsdev_get_opts", 00:06:06.816 "trace_get_info", 00:06:06.816 "trace_get_tpoint_group_mask", 00:06:06.816 "trace_disable_tpoint_group", 00:06:06.816 "trace_enable_tpoint_group", 00:06:06.816 "trace_clear_tpoint_mask", 00:06:06.816 "trace_set_tpoint_mask", 00:06:06.816 "notify_get_notifications", 00:06:06.816 "notify_get_types", 00:06:06.816 "spdk_get_version", 00:06:06.816 "rpc_get_methods" 00:06:06.816 ] 00:06:06.816 13:19:48 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:06:06.816 13:19:48 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:06.816 13:19:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:06.816 13:19:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:06:06.816 13:19:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69637
00:06:06.816 13:19:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 69637 ']'
00:06:06.816 13:19:48 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 69637
00:06:06.816 13:19:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:06:06.816 13:19:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:06.816 13:19:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69637
00:06:06.816 13:19:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:06.816 13:19:48 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:06.816 killing process with pid 69637
13:19:48 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69637'
00:06:06.816 13:19:48 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 69637
00:06:06.816 13:19:48 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 69637
00:06:07.074 ************************************
00:06:07.074 END TEST spdkcli_tcp
00:06:07.074 ************************************
00:06:07.074
00:06:07.074 real 0m1.763s
00:06:07.074 user 0m2.978s
00:06:07.074 sys 0m0.538s
00:06:07.074 13:19:48 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:07.074 13:19:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:07.333 13:19:48 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:07.333 13:19:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:07.333 13:19:48 --
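The xtrace above walks through the autotest `killprocess` helper: probe the PID with `kill -0` (which sends no signal and only tests existence), check the process name, then `kill` and `wait`. A minimal stand-alone sketch of that pattern, assuming a plain SIGTERM is enough (the real helper in autotest_common.sh also handles sudo-owned processes):

```shell
# Simplified sketch of the killprocess pattern traced in the log above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    # kill -0 delivers no signal; it only tests whether the PID exists.
    kill -0 "$pid" 2>/dev/null || return 0
    echo "killing process with pid $pid"
    kill "$pid"
    # Reap the child so the PID cannot be reused; ignore SIGTERM's status.
    wait "$pid" 2>/dev/null || true
}
```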
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.333 13:19:48 -- common/autotest_common.sh@10 -- # set +x 00:06:07.333 ************************************ 00:06:07.333 START TEST dpdk_mem_utility 00:06:07.333 ************************************ 00:06:07.333 13:19:48 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:07.333 * Looking for test storage... 00:06:07.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:07.333 13:19:48 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.333 13:19:48 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.333 13:19:48 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.333 13:19:48 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:07.333 
13:19:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.333 13:19:48 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:07.333 13:19:48 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.333 13:19:48 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.333 --rc genhtml_branch_coverage=1 00:06:07.333 --rc genhtml_function_coverage=1 00:06:07.333 --rc genhtml_legend=1 00:06:07.333 --rc geninfo_all_blocks=1 00:06:07.333 --rc geninfo_unexecuted_blocks=1 00:06:07.333 00:06:07.333 ' 00:06:07.333 13:19:48 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.333 --rc 
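The xtrace in this span is scripts/common.sh comparing lcov's version against 1.15: both versions are split on `.` and `-` into arrays (`IFS=.-` plus `read -ra`), then compared numerically component by component. A hedged stand-alone sketch of that logic (the function name here is assumed; the real helpers are `lt`/`cmp_versions`, and numeric components without leading zeros are assumed):

```shell
# Sketch of the dotted-version comparison traced above: returns 0 (true)
# when $1 is strictly less than $2, treating missing components as 0.
version_lt() {
    local IFS=.-
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local i a b
    for (( i = 0; i < len; i++ )); do
        a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not "less than"
}
```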
genhtml_branch_coverage=1 00:06:07.333 --rc genhtml_function_coverage=1 00:06:07.333 --rc genhtml_legend=1 00:06:07.333 --rc geninfo_all_blocks=1 00:06:07.333 --rc geninfo_unexecuted_blocks=1 00:06:07.333 00:06:07.333 ' 00:06:07.333 13:19:48 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.333 --rc genhtml_branch_coverage=1 00:06:07.333 --rc genhtml_function_coverage=1 00:06:07.333 --rc genhtml_legend=1 00:06:07.333 --rc geninfo_all_blocks=1 00:06:07.333 --rc geninfo_unexecuted_blocks=1 00:06:07.333 00:06:07.333 ' 00:06:07.333 13:19:48 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.334 --rc genhtml_branch_coverage=1 00:06:07.334 --rc genhtml_function_coverage=1 00:06:07.334 --rc genhtml_legend=1 00:06:07.334 --rc geninfo_all_blocks=1 00:06:07.334 --rc geninfo_unexecuted_blocks=1 00:06:07.334 00:06:07.334 ' 00:06:07.334 13:19:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:07.334 13:19:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69737 00:06:07.334 13:19:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:07.334 13:19:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69737 00:06:07.334 13:19:48 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 69737 ']' 00:06:07.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:07.334 13:19:48 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.334 13:19:48 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.334 13:19:48 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.334 13:19:48 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.334 13:19:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:07.593 [2024-11-20 13:19:49.068348] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:07.593 [2024-11-20 13:19:49.068577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69737 ] 00:06:07.593 [2024-11-20 13:19:49.222276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.593 [2024-11-20 13:19:49.248344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.532 13:19:49 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.532 13:19:49 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:08.532 13:19:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:08.532 13:19:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:08.532 13:19:49 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.532 13:19:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.532 { 00:06:08.532 "filename": "/tmp/spdk_mem_dump.txt" 00:06:08.532 } 00:06:08.532 13:19:49 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.532 
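The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from autotest's `waitforlisten` helper, which polls for the RPC socket while confirming the target is still alive. A rough sketch under stated assumptions: the retry-budget parameter is added here for illustration, and the real helper probes the socket through rpc.py rather than just checking the path:

```shell
# Rough waitforlisten-style helper: poll until the UNIX-domain socket
# path exists, bailing out early if the target dies or retries run out.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=${3:-100}   # parameter added for illustration
    while (( max_retries-- > 0 )); do
        [ -S "$rpc_addr" ] && return 0
        kill -0 "$pid" 2>/dev/null || return 1   # target died
        sleep 0.1
    done
    return 1
}
```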
13:19:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:06:08.532 DPDK memory size 810.000000 MiB in 1 heap(s)
00:06:08.532 1 heaps totaling size 810.000000 MiB
00:06:08.532 size: 810.000000 MiB heap id: 0
00:06:08.532 end heaps----------
00:06:08.532 9 mempools totaling size 595.772034 MiB
00:06:08.532 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:06:08.532 size: 158.602051 MiB name: PDU_data_out_Pool
00:06:08.532 size: 92.545471 MiB name: bdev_io_69737
00:06:08.532 size: 50.003479 MiB name: msgpool_69737
00:06:08.532 size: 36.509338 MiB name: fsdev_io_69737
00:06:08.532 size: 21.763794 MiB name: PDU_Pool
00:06:08.532 size: 19.513306 MiB name: SCSI_TASK_Pool
00:06:08.532 size: 4.133484 MiB name: evtpool_69737
00:06:08.532 size: 0.026123 MiB name: Session_Pool
00:06:08.532 end mempools-------
00:06:08.532 6 memzones totaling size 4.142822 MiB
00:06:08.532 size: 1.000366 MiB name: RG_ring_0_69737
00:06:08.532 size: 1.000366 MiB name: RG_ring_1_69737
00:06:08.532 size: 1.000366 MiB name: RG_ring_4_69737
00:06:08.532 size: 1.000366 MiB name: RG_ring_5_69737
00:06:08.532 size: 0.125366 MiB name: RG_ring_2_69737
00:06:08.532 size: 0.015991 MiB name: RG_ring_3_69737
00:06:08.532 end memzones-------
00:06:08.532 13:19:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:06:08.532 heap id: 0 total size: 810.000000 MiB number of busy elements: 311 number of free elements: 15
00:06:08.532 list of free elements.
size: 10.813599 MiB 00:06:08.532 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:08.532 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:08.532 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:08.532 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:08.532 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:08.532 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:08.532 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:08.532 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:08.532 element at address: 0x20001a600000 with size: 0.567322 MiB 00:06:08.532 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:08.532 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:08.532 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:08.532 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:08.532 element at address: 0x200027a00000 with size: 0.396484 MiB 00:06:08.532 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:08.532 list of standard malloc elements. 
size: 199.267517 MiB 00:06:08.532 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:08.532 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:08.533 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:08.533 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:08.533 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:08.533 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:08.533 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:08.533 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:08.533 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:08.533 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:08.533 element at 
address: 0x2000004ff340 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:06:08.533 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:08.533 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7d6c0 with 
size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:08.533 element at address: 
0x200000c7ebc0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:08.533 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:08.534 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:08.534 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:08.534 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:08.534 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:08.534 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:08.534 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:08.534 
element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:08.534 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:08.534 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:08.534 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:08.534 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a691480 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a691540 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a691600 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a691780 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a691840 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a691900 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a692080 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a692140 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a692200 with size: 0.000183 MiB 00:06:08.534 element at address: 0x20001a6922c0 with size: 0.000183 
MiB
[hundreds of heap elements at addresses 0x20001a692380–0x20001a695440 and 0x200027a65800–0x200027a6ff00, each with size: 0.000183 MiB, elided]
00:06:08.535 list of memzone associated elements. size: 599.918884 MiB
00:06:08.535 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:08.535 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:08.535 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_69737_0
00:06:08.535 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69737_0
00:06:08.535 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69737_0
00:06:08.535 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:08.535 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
[remaining associated memzone entries — evtpool, per-pid 69737 ring and pool bookkeeping, and Session_Pool — each 3.1 MiB or smaller, elided]
00:06:08.536 13:19:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:08.536 13:19:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69737
00:06:08.536 13:19:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 69737 ']'
00:06:08.536
13:19:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 69737
00:06:08.536 13:19:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:06:08.536 13:19:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:08.536 13:19:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69737
00:06:08.536 13:19:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:08.536 13:19:50 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:08.536 13:19:50 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69737'
killing process with pid 69737
00:06:08.536 13:19:50 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 69737
00:06:08.536 13:19:50 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 69737
00:06:08.795 ************************************
00:06:08.795 END TEST dpdk_mem_utility
00:06:08.795 ************************************
00:06:08.795
00:06:08.795 real 0m1.649s
00:06:08.795 user 0m1.609s
00:06:08.795 sys 0m0.485s
00:06:08.795 13:19:50 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:08.795 13:19:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:08.795 13:19:50 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:08.795 13:19:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:08.795 13:19:50 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:08.795 13:19:50 -- common/autotest_common.sh@10 -- # set +x
00:06:09.054 ************************************
00:06:09.054 START TEST event
00:06:09.054 ************************************
00:06:09.054 13:19:50 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:09.054 * Looking for test storage...
00:06:09.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:09.054 13:19:50 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.054 13:19:50 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.054 13:19:50 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.054 13:19:50 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.054 13:19:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.054 13:19:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.054 13:19:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.054 13:19:50 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.055 13:19:50 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.055 13:19:50 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.055 13:19:50 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.055 13:19:50 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.055 13:19:50 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.055 13:19:50 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.055 13:19:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.055 13:19:50 event -- scripts/common.sh@344 -- # case "$op" in 00:06:09.055 13:19:50 event -- scripts/common.sh@345 -- # : 1 00:06:09.055 13:19:50 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.055 13:19:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.055 13:19:50 event -- scripts/common.sh@365 -- # decimal 1 00:06:09.055 13:19:50 event -- scripts/common.sh@353 -- # local d=1 00:06:09.055 13:19:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.055 13:19:50 event -- scripts/common.sh@355 -- # echo 1 00:06:09.055 13:19:50 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.055 13:19:50 event -- scripts/common.sh@366 -- # decimal 2 00:06:09.055 13:19:50 event -- scripts/common.sh@353 -- # local d=2 00:06:09.055 13:19:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.055 13:19:50 event -- scripts/common.sh@355 -- # echo 2 00:06:09.055 13:19:50 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.055 13:19:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.055 13:19:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.055 13:19:50 event -- scripts/common.sh@368 -- # return 0 00:06:09.055 13:19:50 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.055 13:19:50 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.055 --rc genhtml_branch_coverage=1 00:06:09.055 --rc genhtml_function_coverage=1 00:06:09.055 --rc genhtml_legend=1 00:06:09.055 --rc geninfo_all_blocks=1 00:06:09.055 --rc geninfo_unexecuted_blocks=1 00:06:09.055 00:06:09.055 ' 00:06:09.055 13:19:50 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.055 --rc genhtml_branch_coverage=1 00:06:09.055 --rc genhtml_function_coverage=1 00:06:09.055 --rc genhtml_legend=1 00:06:09.055 --rc geninfo_all_blocks=1 00:06:09.055 --rc geninfo_unexecuted_blocks=1 00:06:09.055 00:06:09.055 ' 00:06:09.055 13:19:50 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.055 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:09.055 --rc genhtml_branch_coverage=1 00:06:09.055 --rc genhtml_function_coverage=1 00:06:09.055 --rc genhtml_legend=1 00:06:09.055 --rc geninfo_all_blocks=1 00:06:09.055 --rc geninfo_unexecuted_blocks=1 00:06:09.055 00:06:09.055 ' 00:06:09.055 13:19:50 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.055 --rc genhtml_branch_coverage=1 00:06:09.055 --rc genhtml_function_coverage=1 00:06:09.055 --rc genhtml_legend=1 00:06:09.055 --rc geninfo_all_blocks=1 00:06:09.055 --rc geninfo_unexecuted_blocks=1 00:06:09.055 00:06:09.055 ' 00:06:09.055 13:19:50 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:09.055 13:19:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:09.055 13:19:50 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:09.055 13:19:50 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:09.055 13:19:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.055 13:19:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.055 ************************************ 00:06:09.055 START TEST event_perf 00:06:09.055 ************************************ 00:06:09.055 13:19:50 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:09.315 Running I/O for 1 seconds...[2024-11-20 13:19:50.747646] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:09.315 [2024-11-20 13:19:50.747789] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69812 ]
00:06:09.315 [2024-11-20 13:19:50.904788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:09.315 [2024-11-20 13:19:50.934896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:09.315 [2024-11-20 13:19:50.935164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.315 [2024-11-20 13:19:50.935306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:09.315 Running I/O for 1 seconds...[2024-11-20 13:19:50.935132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:10.723
00:06:10.723 lcore 0: 210931
00:06:10.723 lcore 1: 210930
00:06:10.723 lcore 2: 210930
00:06:10.723 lcore 3: 210930
00:06:10.723 done.
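The `lcore N: count` lines above are event_perf's per-core event totals for the one-second run; with a balanced round-robin scheduler they come out nearly identical (here within a single event). A minimal shell sketch of that balance check, using the counts from this run (the `parse_lcore_counts` helper name is hypothetical, not part of the SPDK test scripts):

```shell
#!/bin/sh
# Hypothetical post-processing of event_perf output: extract the
# per-lcore totals and check they fall within 1% of each other.
parse_lcore_counts() {
    awk '/^lcore [0-9]+:/ {print $3}'
}

# Counts taken from the run logged above.
output='lcore 0: 210931
lcore 1: 210930
lcore 2: 210930
lcore 3: 210930'

min=$(printf '%s\n' "$output" | parse_lcore_counts | sort -n | head -n 1)
max=$(printf '%s\n' "$output" | parse_lcore_counts | sort -n | tail -n 1)

# A spread of one event against ~210k total is well inside the budget.
if [ $((max - min)) -le $((max / 100)) ]; then
    echo "balanced: min=$min max=$max"
fi
```

Run against this log's numbers it prints `balanced: min=210930 max=210931`.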
00:06:10.723
00:06:10.723 real 0m1.294s
00:06:10.723 user 0m4.081s
00:06:10.723 sys 0m0.094s
00:06:10.723 13:19:52 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:10.723 13:19:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:10.723 ************************************
00:06:10.723 END TEST event_perf
00:06:10.723 ************************************
00:06:10.723 13:19:52 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:10.723 13:19:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:06:10.723 13:19:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:10.723 13:19:52 event -- common/autotest_common.sh@10 -- # set +x
00:06:10.723 ************************************
00:06:10.723 START TEST event_reactor
00:06:10.723 ************************************
00:06:10.723 13:19:52 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:10.723 [2024-11-20 13:19:52.103175] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:06:10.723 [2024-11-20 13:19:52.103311] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69856 ]
00:06:10.723 [2024-11-20 13:19:52.250388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:10.723 [2024-11-20 13:19:52.277825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.098 test_start
00:06:12.098 oneshot
00:06:12.098 tick 100
00:06:12.098 tick 100
00:06:12.098 tick 250
00:06:12.098 tick 100
00:06:12.098 tick 100
00:06:12.098 tick 100
00:06:12.098 tick 250
00:06:12.098 tick 500
00:06:12.098 tick 100
00:06:12.098 tick 100
00:06:12.098 tick 250
00:06:12.098 tick 100
00:06:12.098 tick 100
00:06:12.098 test_end
00:06:12.098
00:06:12.098 real 0m1.273s
00:06:12.098 user 0m1.094s
00:06:12.098 sys 0m0.072s
00:06:12.098 13:19:53 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:12.098 13:19:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:12.099 ************************************
00:06:12.099 END TEST event_reactor
00:06:12.099 ************************************
00:06:12.099 13:19:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:12.099 13:19:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:06:12.099 13:19:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:12.099 13:19:53 event -- common/autotest_common.sh@10 -- # set +x
00:06:12.099 ************************************
00:06:12.099 START TEST event_reactor_perf
00:06:12.099 ************************************
00:06:12.099 13:19:53 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:12.099 [2024-11-20 13:19:53.447567] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:06:12.099 [2024-11-20 13:19:53.447766] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69888 ]
00:06:12.099 [2024-11-20 13:19:53.590596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:12.099 [2024-11-20 13:19:53.616356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:13.038 test_start
00:06:13.038 test_end
00:06:13.038 Performance: 389203 events per second
00:06:13.038
00:06:13.039 real 0m1.272s
00:06:13.039 user 0m1.100s
00:06:13.039 sys 0m0.065s
00:06:13.039 13:19:54 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:13.039 13:19:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:13.039 ************************************
00:06:13.039 END TEST event_reactor_perf
00:06:13.039 ************************************
00:06:13.296 13:19:54 event -- event/event.sh@49 -- # uname -s
00:06:13.296 13:19:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:13.296 13:19:54 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:13.296 13:19:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:13.296 13:19:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:13.296 13:19:54 event -- common/autotest_common.sh@10 -- # set +x
00:06:13.296 ************************************
00:06:13.296 START TEST event_scheduler
00:06:13.296 ************************************
00:06:13.296 13:19:54 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:13.296 * Looking for test storage...
00:06:13.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:13.296 13:19:54 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:13.296 13:19:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:13.296 13:19:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:13.296 13:19:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:13.296 13:19:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.296 13:19:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.296 13:19:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.296 13:19:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.296 13:19:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.296 13:19:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.296 13:19:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.296 13:19:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.296 13:19:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.296 13:19:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.297 13:19:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.297 13:19:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:13.297 13:19:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:13.297 13:19:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.297 13:19:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.556 13:19:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:13.556 13:19:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:13.556 13:19:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.556 13:19:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:13.556 13:19:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.556 13:19:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:13.556 13:19:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:13.556 13:19:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.556 13:19:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:13.556 13:19:54 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.556 13:19:54 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.556 13:19:54 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.556 13:19:54 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:13.556 13:19:54 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.556 13:19:54 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:13.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.556 --rc genhtml_branch_coverage=1 00:06:13.556 --rc genhtml_function_coverage=1 00:06:13.556 --rc genhtml_legend=1 00:06:13.556 --rc geninfo_all_blocks=1 00:06:13.556 --rc geninfo_unexecuted_blocks=1 00:06:13.556 00:06:13.556 ' 00:06:13.556 13:19:54 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:13.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.556 --rc genhtml_branch_coverage=1 00:06:13.556 --rc genhtml_function_coverage=1 00:06:13.556 --rc 
genhtml_legend=1 00:06:13.556 --rc geninfo_all_blocks=1 00:06:13.556 --rc geninfo_unexecuted_blocks=1 00:06:13.556 00:06:13.556 ' 00:06:13.556 13:19:54 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:13.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.556 --rc genhtml_branch_coverage=1 00:06:13.556 --rc genhtml_function_coverage=1 00:06:13.556 --rc genhtml_legend=1 00:06:13.556 --rc geninfo_all_blocks=1 00:06:13.556 --rc geninfo_unexecuted_blocks=1 00:06:13.556 00:06:13.556 ' 00:06:13.556 13:19:54 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:13.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.556 --rc genhtml_branch_coverage=1 00:06:13.556 --rc genhtml_function_coverage=1 00:06:13.556 --rc genhtml_legend=1 00:06:13.556 --rc geninfo_all_blocks=1 00:06:13.556 --rc geninfo_unexecuted_blocks=1 00:06:13.556 00:06:13.556 ' 00:06:13.556 13:19:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:13.556 13:19:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:13.556 13:19:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=69953 00:06:13.556 13:19:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.556 13:19:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 69953 00:06:13.556 13:19:54 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 69953 ']' 00:06:13.556 13:19:54 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.556 13:19:54 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.556 13:19:54 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...'
00:06:13.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:13.556 13:19:54 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:13.556 13:19:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:13.556 [2024-11-20 13:19:55.049813] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:06:13.556 [2024-11-20 13:19:55.050042] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69953 ]
00:06:13.556 [2024-11-20 13:19:55.209013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:13.816 [2024-11-20 13:19:55.239657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:13.816 [2024-11-20 13:19:55.240214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:13.816 [2024-11-20 13:19:55.240310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:13.816 [2024-11-20 13:19:55.240205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:14.383 13:19:55 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:14.383 13:19:55 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:06:14.383 13:19:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:14.383 13:19:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:14.383 13:19:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:14.383 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:14.383 POWER: Cannot set governor of lcore 0 to userspace
00:06:14.383 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:14.383 POWER: Cannot set governor of lcore 0 to performance
00:06:14.383 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:14.383 POWER: Cannot set governor of lcore 0 to userspace
00:06:14.383 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:06:14.383 POWER: Unable to set Power Management Environment for lcore 0
00:06:14.383 [2024-11-20 13:19:55.901288] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:06:14.383 [2024-11-20 13:19:55.901342] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:06:14.383 [2024-11-20 13:19:55.901404] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:06:14.383 [2024-11-20 13:19:55.901466] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:14.383 [2024-11-20 13:19:55.901507] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:14.383 [2024-11-20 13:19:55.901568] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:14.383 13:19:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:14.383 13:19:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:14.383 13:19:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:14.383 13:19:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:14.383 [2024-11-20 13:19:55.976372] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:14.383 13:19:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.383 13:19:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:14.383 13:19:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.383 13:19:55 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.383 13:19:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.383 ************************************ 00:06:14.383 START TEST scheduler_create_thread 00:06:14.383 ************************************ 00:06:14.383 13:19:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:14.383 13:19:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:14.383 13:19:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.383 13:19:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.383 2 00:06:14.383 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.383 13:19:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:14.383 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.383 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.383 3 00:06:14.383 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.383 13:19:56 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:14.383 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.383 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.383 4 00:06:14.383 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.383 13:19:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:14.383 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.383 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.383 5 00:06:14.383 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.383 13:19:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:14.383 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.383 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.642 6 00:06:14.642 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.642 13:19:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:14.642 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.642 13:19:56 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:14.642 7 00:06:14.642 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.642 13:19:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:14.642 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.642 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.642 8 00:06:14.642 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.643 13:19:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:14.643 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.643 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.643 9 00:06:14.643 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.643 13:19:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:14.643 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.643 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.903 10 00:06:14.903 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.903 13:19:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:14.903 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.903 13:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.274 13:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.274 13:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:16.274 13:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:16.274 13:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.274 13:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.209 13:19:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.209 13:19:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:17.209 13:19:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.209 13:19:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.824 13:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.824 13:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:17.824 13:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:17.824 13:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.824 13:19:59 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.781 ************************************ 00:06:18.781 END TEST scheduler_create_thread 00:06:18.781 ************************************ 00:06:18.781 13:20:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.781 00:06:18.781 real 0m4.212s 00:06:18.781 user 0m0.025s 00:06:18.781 sys 0m0.010s 00:06:18.781 13:20:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.781 13:20:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.781 13:20:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:18.781 13:20:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 69953 00:06:18.781 13:20:00 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 69953 ']' 00:06:18.781 13:20:00 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 69953 00:06:18.781 13:20:00 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:18.781 13:20:00 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.781 13:20:00 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69953 00:06:18.781 killing process with pid 69953 00:06:18.781 13:20:00 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:18.781 13:20:00 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:18.781 13:20:00 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69953' 00:06:18.781 13:20:00 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 69953 00:06:18.781 13:20:00 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 69953 00:06:19.043 [2024-11-20 13:20:00.480741] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:19.304 ************************************ 00:06:19.304 END TEST event_scheduler 00:06:19.304 ************************************ 00:06:19.304 00:06:19.304 real 0m5.993s 00:06:19.304 user 0m13.018s 00:06:19.304 sys 0m0.459s 00:06:19.304 13:20:00 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.304 13:20:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.304 13:20:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:19.304 13:20:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:19.304 13:20:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.304 13:20:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.304 13:20:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.304 ************************************ 00:06:19.304 START TEST app_repeat 00:06:19.304 ************************************ 00:06:19.304 13:20:00 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:19.304 13:20:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.304 13:20:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.304 13:20:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:19.304 13:20:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.304 13:20:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:19.304 13:20:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:19.304 13:20:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:19.304 13:20:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70070 00:06:19.304 13:20:00 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:19.304 
13:20:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.304 Process app_repeat pid: 70070 00:06:19.304 13:20:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70070' 00:06:19.304 13:20:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:19.304 spdk_app_start Round 0 00:06:19.304 13:20:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:19.304 13:20:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70070 /var/tmp/spdk-nbd.sock 00:06:19.305 13:20:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70070 ']' 00:06:19.305 13:20:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.305 13:20:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.305 13:20:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.305 13:20:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.305 13:20:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.305 [2024-11-20 13:20:00.877691] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:19.305 [2024-11-20 13:20:00.877897] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70070 ] 00:06:19.564 [2024-11-20 13:20:01.012884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.564 [2024-11-20 13:20:01.038863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.564 [2024-11-20 13:20:01.038976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.133 13:20:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.133 13:20:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:20.133 13:20:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.394 Malloc0 00:06:20.394 13:20:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.654 Malloc1 00:06:20.654 13:20:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.654 13:20:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.654 13:20:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.654 13:20:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.654 13:20:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.654 13:20:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.654 13:20:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.654 13:20:02 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.654 13:20:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.654 13:20:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.654 13:20:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.654 13:20:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.654 13:20:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.654 13:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.654 13:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.654 13:20:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.914 /dev/nbd0 00:06:20.914 13:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.914 13:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.914 13:20:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:20.914 13:20:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:20.914 13:20:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.914 13:20:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.914 13:20:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:20.914 13:20:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:20.914 13:20:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.914 13:20:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.914 13:20:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.914 1+0 records in 00:06:20.914 1+0 
records out 00:06:20.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170968 s, 24.0 MB/s 00:06:20.914 13:20:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.914 13:20:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:20.914 13:20:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.914 13:20:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:20.914 13:20:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:20.914 13:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.914 13:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.914 13:20:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.173 /dev/nbd1 00:06:21.173 13:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.173 13:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.173 13:20:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:21.173 13:20:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:21.173 13:20:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.173 13:20:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.173 13:20:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:21.173 13:20:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:21.173 13:20:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.173 13:20:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.173 13:20:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.173 1+0 records in 00:06:21.173 1+0 records out 00:06:21.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032788 s, 12.5 MB/s 00:06:21.173 13:20:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.173 13:20:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:21.173 13:20:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.173 13:20:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.173 13:20:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:21.173 13:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.173 13:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.173 13:20:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.173 13:20:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.173 13:20:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.432 { 00:06:21.432 "nbd_device": "/dev/nbd0", 00:06:21.432 "bdev_name": "Malloc0" 00:06:21.432 }, 00:06:21.432 { 00:06:21.432 "nbd_device": "/dev/nbd1", 00:06:21.432 "bdev_name": "Malloc1" 00:06:21.432 } 00:06:21.432 ]' 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.432 { 00:06:21.432 "nbd_device": "/dev/nbd0", 00:06:21.432 "bdev_name": "Malloc0" 00:06:21.432 }, 00:06:21.432 { 00:06:21.432 "nbd_device": "/dev/nbd1", 00:06:21.432 "bdev_name": "Malloc1" 00:06:21.432 } 00:06:21.432 ]' 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.432 /dev/nbd1' 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.432 /dev/nbd1' 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.432 256+0 records in 00:06:21.432 256+0 records out 00:06:21.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136382 s, 76.9 MB/s 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.432 256+0 records in 00:06:21.432 256+0 records out 00:06:21.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162566 s, 64.5 MB/s 00:06:21.432 13:20:02 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.432 256+0 records in 00:06:21.432 256+0 records out 00:06:21.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203995 s, 51.4 MB/s 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.432 13:20:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.432 13:20:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.432 13:20:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.432 13:20:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.432 13:20:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.432 13:20:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.432 13:20:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.432 13:20:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.432 13:20:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:21.432 13:20:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.432 13:20:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.690 13:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.690 13:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.690 13:20:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.690 13:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.690 13:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.690 13:20:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.690 13:20:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.690 13:20:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.690 13:20:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.690 13:20:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.950 13:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.950 13:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.950 13:20:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.950 13:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.950 13:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.950 13:20:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.950 13:20:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:21.950 13:20:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.950 13:20:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.950 13:20:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.950 13:20:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.209 13:20:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.209 13:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.209 13:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.209 13:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.209 13:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.209 13:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.209 13:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:22.209 13:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.209 13:20:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.209 13:20:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.209 13:20:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.209 13:20:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.209 13:20:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.467 13:20:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.467 [2024-11-20 13:20:04.085730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.467 [2024-11-20 13:20:04.108781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.467 [2024-11-20 13:20:04.108785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.725 
[2024-11-20 13:20:04.151202] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.725 [2024-11-20 13:20:04.151298] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:26.014 13:20:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:26.014 spdk_app_start Round 1 00:06:26.014 13:20:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:26.014 13:20:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70070 /var/tmp/spdk-nbd.sock 00:06:26.014 13:20:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70070 ']' 00:06:26.014 13:20:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.014 13:20:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.014 13:20:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:26.014 13:20:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.014 13:20:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.014 13:20:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.014 13:20:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:26.014 13:20:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.014 Malloc0 00:06:26.014 13:20:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.014 Malloc1 00:06:26.014 13:20:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.014 13:20:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.014 13:20:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.014 13:20:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.014 13:20:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.014 13:20:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.014 13:20:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.014 13:20:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.014 13:20:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.014 13:20:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.014 13:20:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.014 13:20:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.014 13:20:07 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.014 13:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.014 13:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.014 13:20:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.300 /dev/nbd0 00:06:26.300 13:20:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.300 13:20:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.300 13:20:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:26.300 13:20:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:26.300 13:20:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.300 13:20:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.300 13:20:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:26.300 13:20:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:26.300 13:20:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.300 13:20:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.300 13:20:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.300 1+0 records in 00:06:26.300 1+0 records out 00:06:26.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394784 s, 10.4 MB/s 00:06:26.300 13:20:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.300 13:20:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:26.300 13:20:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.300 
13:20:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.300 13:20:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:26.300 13:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.300 13:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.300 13:20:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.560 /dev/nbd1 00:06:26.560 13:20:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.560 13:20:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.560 13:20:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:26.560 13:20:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:26.560 13:20:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.560 13:20:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.560 13:20:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:26.560 13:20:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:26.560 13:20:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.560 13:20:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.560 13:20:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.560 1+0 records in 00:06:26.560 1+0 records out 00:06:26.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384215 s, 10.7 MB/s 00:06:26.560 13:20:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.560 13:20:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:26.560 13:20:08 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.560 13:20:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.560 13:20:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:26.560 13:20:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.560 13:20:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.560 13:20:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.560 13:20:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.560 13:20:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.819 13:20:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.819 { 00:06:26.819 "nbd_device": "/dev/nbd0", 00:06:26.820 "bdev_name": "Malloc0" 00:06:26.820 }, 00:06:26.820 { 00:06:26.820 "nbd_device": "/dev/nbd1", 00:06:26.820 "bdev_name": "Malloc1" 00:06:26.820 } 00:06:26.820 ]' 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.820 { 00:06:26.820 "nbd_device": "/dev/nbd0", 00:06:26.820 "bdev_name": "Malloc0" 00:06:26.820 }, 00:06:26.820 { 00:06:26.820 "nbd_device": "/dev/nbd1", 00:06:26.820 "bdev_name": "Malloc1" 00:06:26.820 } 00:06:26.820 ]' 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.820 /dev/nbd1' 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.820 /dev/nbd1' 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.820 
13:20:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.820 256+0 records in 00:06:26.820 256+0 records out 00:06:26.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135537 s, 77.4 MB/s 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.820 256+0 records in 00:06:26.820 256+0 records out 00:06:26.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210187 s, 49.9 MB/s 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.820 256+0 records in 00:06:26.820 256+0 records out 00:06:26.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201225 s, 52.1 MB/s 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.820 13:20:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.080 13:20:08 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.080 13:20:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.080 13:20:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.080 13:20:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.080 13:20:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.080 13:20:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.080 13:20:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.080 13:20:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.080 13:20:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.080 13:20:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.340 13:20:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.340 13:20:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.340 13:20:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.340 13:20:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.340 13:20:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.340 13:20:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.340 13:20:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.340 13:20:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.340 13:20:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.340 13:20:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.340 13:20:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.600 13:20:09 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.600 13:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.600 13:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.600 13:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.600 13:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.600 13:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.600 13:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.600 13:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.600 13:20:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.600 13:20:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.600 13:20:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.600 13:20:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.600 13:20:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.858 13:20:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.116 [2024-11-20 13:20:09.531921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.116 [2024-11-20 13:20:09.555315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.116 [2024-11-20 13:20:09.555337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.116 [2024-11-20 13:20:09.597882] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.116 [2024-11-20 13:20:09.597962] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:31.407 13:20:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.407 spdk_app_start Round 2 00:06:31.407 13:20:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:31.407 13:20:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70070 /var/tmp/spdk-nbd.sock 00:06:31.407 13:20:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70070 ']' 00:06:31.407 13:20:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.407 13:20:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:31.407 13:20:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.407 13:20:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.407 13:20:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.407 13:20:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.407 13:20:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:31.407 13:20:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.407 Malloc0 00:06:31.407 13:20:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.407 Malloc1 00:06:31.407 13:20:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.407 13:20:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.407 13:20:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.407 
13:20:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.407 13:20:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.407 13:20:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.407 13:20:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.407 13:20:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.407 13:20:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.407 13:20:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.407 13:20:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.407 13:20:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.407 13:20:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.407 13:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.407 13:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.407 13:20:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.666 /dev/nbd0 00:06:31.666 13:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.666 13:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.666 13:20:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:31.666 13:20:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:31.666 13:20:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:31.666 13:20:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:31.666 13:20:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:31.666 13:20:13 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:31.667 13:20:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:31.667 13:20:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:31.667 13:20:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.667 1+0 records in 00:06:31.667 1+0 records out 00:06:31.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396051 s, 10.3 MB/s 00:06:31.667 13:20:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.667 13:20:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:31.667 13:20:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.667 13:20:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:31.667 13:20:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:31.667 13:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.667 13:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.667 13:20:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:31.927 /dev/nbd1 00:06:31.927 13:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.928 13:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.928 13:20:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:31.928 13:20:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:31.928 13:20:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:31.928 13:20:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:31.928 13:20:13 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:31.928 13:20:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:31.928 13:20:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:31.928 13:20:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:31.928 13:20:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.928 1+0 records in 00:06:31.928 1+0 records out 00:06:31.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429778 s, 9.5 MB/s 00:06:31.928 13:20:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.928 13:20:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:31.928 13:20:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.928 13:20:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:31.928 13:20:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:31.928 13:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.928 13:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.928 13:20:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.928 13:20:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.928 13:20:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.187 13:20:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.187 { 00:06:32.187 "nbd_device": "/dev/nbd0", 00:06:32.187 "bdev_name": "Malloc0" 00:06:32.187 }, 00:06:32.187 { 00:06:32.187 "nbd_device": "/dev/nbd1", 00:06:32.187 "bdev_name": 
"Malloc1" 00:06:32.187 } 00:06:32.187 ]' 00:06:32.187 13:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.187 13:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.187 { 00:06:32.187 "nbd_device": "/dev/nbd0", 00:06:32.187 "bdev_name": "Malloc0" 00:06:32.187 }, 00:06:32.187 { 00:06:32.187 "nbd_device": "/dev/nbd1", 00:06:32.187 "bdev_name": "Malloc1" 00:06:32.187 } 00:06:32.187 ]' 00:06:32.187 13:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.187 /dev/nbd1' 00:06:32.187 13:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.187 /dev/nbd1' 00:06:32.187 13:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.187 13:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.187 13:20:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.187 13:20:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.187 13:20:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.187 13:20:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.187 13:20:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.187 13:20:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.187 13:20:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.187 13:20:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.187 13:20:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.188 13:20:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.188 256+0 records in 00:06:32.188 256+0 records out 00:06:32.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125909 s, 83.3 MB/s 
00:06:32.188 13:20:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.188 13:20:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.447 256+0 records in 00:06:32.447 256+0 records out 00:06:32.447 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254156 s, 41.3 MB/s 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.447 256+0 records in 00:06:32.447 256+0 records out 00:06:32.447 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024491 s, 42.8 MB/s 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.447 13:20:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.447 13:20:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.447 13:20:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.447 13:20:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.447 13:20:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.447 13:20:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.447 13:20:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.447 13:20:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.447 13:20:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.447 13:20:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.447 13:20:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.706 13:20:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.707 13:20:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:32.707 13:20:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.707 13:20:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.707 13:20:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.707 13:20:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.707 13:20:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.707 13:20:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.707 13:20:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.707 13:20:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.707 13:20:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.004 13:20:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.004 13:20:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.004 13:20:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.004 13:20:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.004 13:20:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.004 13:20:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.004 13:20:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.004 13:20:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.004 13:20:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.004 13:20:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.004 13:20:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.004 13:20:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.004 13:20:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.263 13:20:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:33.524 [2024-11-20 13:20:14.966805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.524 [2024-11-20 13:20:14.990474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.524 [2024-11-20 13:20:14.990479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.524 [2024-11-20 13:20:15.032810] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:33.524 [2024-11-20 13:20:15.032874] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.815 13:20:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70070 /var/tmp/spdk-nbd.sock 00:06:36.815 13:20:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70070 ']' 00:06:36.815 13:20:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.815 13:20:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:36.815 13:20:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:36.815 13:20:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.815 13:20:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.815 13:20:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.815 13:20:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:36.815 13:20:18 event.app_repeat -- event/event.sh@39 -- # killprocess 70070 00:06:36.815 13:20:18 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70070 ']' 00:06:36.815 13:20:18 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70070 00:06:36.815 13:20:18 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:36.815 13:20:18 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.815 13:20:18 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70070 00:06:36.815 13:20:18 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.815 13:20:18 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.815 killing process with pid 70070 00:06:36.815 13:20:18 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70070' 00:06:36.815 13:20:18 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70070 00:06:36.815 13:20:18 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70070 00:06:36.815 spdk_app_start is called in Round 0. 00:06:36.815 Shutdown signal received, stop current app iteration 00:06:36.815 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 reinitialization... 00:06:36.815 spdk_app_start is called in Round 1. 00:06:36.815 Shutdown signal received, stop current app iteration 00:06:36.815 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 reinitialization... 00:06:36.815 spdk_app_start is called in Round 2. 
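The `killprocess` helper traced above can be sketched as follows. This is a reconstruction inferred from the xtrace lines (the `uname`/`ps` guard and the `kill`/`wait` pair), not the verbatim autotest_common.sh source:

```shell
# Sketch of the killprocess pattern from the trace: verify the pid is
# alive, refuse to signal a sudo wrapper by mistake, then TERM and reap.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1   # process must still exist
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # never send signals to a sudo process that may own a session
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; ignore SIGTERM status
}
```

The `wait` at the end is what produces the "Shutdown signal received" round messages interleaved above: the caller blocks until the reactor loop drains and the app exits.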
00:06:36.815 Shutdown signal received, stop current app iteration 00:06:36.815 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 reinitialization... 00:06:36.815 spdk_app_start is called in Round 3. 00:06:36.815 Shutdown signal received, stop current app iteration 00:06:36.815 13:20:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:36.815 13:20:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:36.815 00:06:36.815 real 0m17.443s 00:06:36.815 user 0m38.661s 00:06:36.815 sys 0m2.620s 00:06:36.815 13:20:18 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.815 13:20:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.815 ************************************ 00:06:36.815 END TEST app_repeat 00:06:36.815 ************************************ 00:06:36.815 13:20:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:36.815 13:20:18 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:36.815 13:20:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.815 13:20:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.815 13:20:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.815 ************************************ 00:06:36.815 START TEST cpu_locks 00:06:36.815 ************************************ 00:06:36.815 13:20:18 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:36.815 * Looking for test storage... 
00:06:36.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:36.815 13:20:18 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.815 13:20:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.815 13:20:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:37.076 13:20:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.076 13:20:18 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:37.076 13:20:18 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.076 13:20:18 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:37.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.076 --rc genhtml_branch_coverage=1 00:06:37.076 --rc genhtml_function_coverage=1 00:06:37.076 --rc genhtml_legend=1 00:06:37.076 --rc geninfo_all_blocks=1 00:06:37.076 --rc geninfo_unexecuted_blocks=1 00:06:37.076 00:06:37.076 ' 00:06:37.076 13:20:18 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:37.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.076 --rc genhtml_branch_coverage=1 00:06:37.076 --rc genhtml_function_coverage=1 00:06:37.076 --rc genhtml_legend=1 00:06:37.076 --rc geninfo_all_blocks=1 00:06:37.076 --rc geninfo_unexecuted_blocks=1 
00:06:37.076 00:06:37.076 ' 00:06:37.076 13:20:18 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:37.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.076 --rc genhtml_branch_coverage=1 00:06:37.076 --rc genhtml_function_coverage=1 00:06:37.076 --rc genhtml_legend=1 00:06:37.076 --rc geninfo_all_blocks=1 00:06:37.076 --rc geninfo_unexecuted_blocks=1 00:06:37.076 00:06:37.076 ' 00:06:37.076 13:20:18 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:37.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.076 --rc genhtml_branch_coverage=1 00:06:37.076 --rc genhtml_function_coverage=1 00:06:37.076 --rc genhtml_legend=1 00:06:37.076 --rc geninfo_all_blocks=1 00:06:37.076 --rc geninfo_unexecuted_blocks=1 00:06:37.076 00:06:37.076 ' 00:06:37.076 13:20:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:37.076 13:20:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:37.076 13:20:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:37.076 13:20:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:37.076 13:20:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.076 13:20:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.076 13:20:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.076 ************************************ 00:06:37.076 START TEST default_locks 00:06:37.076 ************************************ 00:06:37.076 13:20:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:37.076 13:20:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70495 00:06:37.076 13:20:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.076 
13:20:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70495 00:06:37.076 13:20:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70495 ']' 00:06:37.077 13:20:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.077 13:20:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.077 13:20:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.077 13:20:18 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.077 13:20:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.077 [2024-11-20 13:20:18.659172] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:37.077 [2024-11-20 13:20:18.659781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70495 ] 00:06:37.336 [2024-11-20 13:20:18.814880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.336 [2024-11-20 13:20:18.842016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.905 13:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.905 13:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:37.905 13:20:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70495 00:06:37.905 13:20:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.905 13:20:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70495 00:06:38.475 13:20:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70495 00:06:38.475 13:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70495 ']' 00:06:38.475 13:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70495 00:06:38.475 13:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:38.475 13:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.475 13:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70495 00:06:38.475 13:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.475 13:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.475 killing process with pid 70495 00:06:38.475 13:20:19 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 70495' 00:06:38.475 13:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70495 00:06:38.475 13:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70495 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70495 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70495 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70495 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70495 ']' 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
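The `locks_exist` check traced earlier in this test (`lslocks -p … | grep -q spdk_cpu_lock`) asserts that the target still holds its CPU-core lock files. A hedged sketch, assuming `lslocks` from util-linux and the `spdk_cpu_lock` file-name prefix seen in the trace:

```shell
# Sketch of the locks_exist assertion from the trace: list the file
# locks held by the given pid and check that one of them is an
# spdk_cpu_lock file (the per-core lock spdk_tgt takes when cpumask
# locking is enabled).
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}
```

Because `grep -q` sets the exit status, the helper can be used directly as a test condition, which is exactly how the trace shows it being consumed.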
00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.735 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70495) - No such process 00:06:38.735 ERROR: process (pid: 70495) is no longer running 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:38.735 00:06:38.735 real 0m1.686s 00:06:38.735 user 0m1.669s 00:06:38.735 sys 0m0.577s 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.735 13:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.735 ************************************ 00:06:38.735 END TEST default_locks 00:06:38.735 ************************************ 00:06:38.735 13:20:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:38.735 13:20:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:06:38.735 13:20:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.735 13:20:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.735 ************************************ 00:06:38.735 START TEST default_locks_via_rpc 00:06:38.735 ************************************ 00:06:38.735 13:20:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:38.735 13:20:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70543 00:06:38.735 13:20:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.735 13:20:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70543 00:06:38.735 13:20:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70543 ']' 00:06:38.735 13:20:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.735 13:20:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.735 13:20:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.735 13:20:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.735 13:20:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.995 [2024-11-20 13:20:20.419195] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:38.995 [2024-11-20 13:20:20.419350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70543 ] 00:06:38.995 [2024-11-20 13:20:20.558844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.995 [2024-11-20 13:20:20.583722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.564 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.564 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.564 13:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:39.564 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.564 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.829 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.829 13:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:39.829 13:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:39.829 13:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:39.829 13:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:39.829 13:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.829 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.829 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.829 13:20:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.829 13:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70543 00:06:39.829 13:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.829 13:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70543 00:06:40.090 13:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70543 00:06:40.090 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 70543 ']' 00:06:40.090 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 70543 00:06:40.090 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:40.090 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.090 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70543 00:06:40.090 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.090 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.090 killing process with pid 70543 00:06:40.090 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70543' 00:06:40.090 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 70543 00:06:40.090 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 70543 00:06:40.349 00:06:40.349 real 0m1.571s 00:06:40.349 user 0m1.538s 00:06:40.349 sys 0m0.527s 00:06:40.349 13:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.349 13:20:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.349 ************************************ 00:06:40.349 END TEST default_locks_via_rpc 00:06:40.349 ************************************ 00:06:40.349 13:20:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:40.349 13:20:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.349 13:20:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.349 13:20:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.349 ************************************ 00:06:40.349 START TEST non_locking_app_on_locked_coremask 00:06:40.349 ************************************ 00:06:40.349 13:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:40.349 13:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70595 00:06:40.349 13:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.349 13:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70595 /var/tmp/spdk.sock 00:06:40.349 13:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70595 ']' 00:06:40.349 13:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.349 13:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
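`waitforlisten` shows up before every daemon interaction in this log. A minimal sketch of the pattern follows; the loop body (socket probe plus retry budget) is an assumption, since only the traced variables (`rpc_addr`, `max_retries`, `i`) are visible in the xtrace:

```shell
# Hedged sketch of the waitforlisten pattern recurring in this trace:
# block until the target pid is up and its UNIX domain socket exists.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died while waiting
        [ -S "$rpc_addr" ] && return 0           # socket present: listening
        sleep 0.1
    done
    return 1
}
```

The `(( i == 0 ))` and `return 0` lines in the trace suggest the real helper distinguishes success on the first probe from success after retries; this sketch collapses that into a single loop.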
00:06:40.349 13:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.349 13:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.349 13:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.609 [2024-11-20 13:20:22.050747] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:40.609 [2024-11-20 13:20:22.050878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70595 ] 00:06:40.609 [2024-11-20 13:20:22.183225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.609 [2024-11-20 13:20:22.211239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.869 13:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.869 13:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:40.869 13:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70598 00:06:40.869 13:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70598 /var/tmp/spdk2.sock 00:06:40.869 13:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:40.869 13:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70598 ']' 00:06:40.869 13:20:22 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.869 13:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.869 13:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.869 13:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.869 13:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.869 [2024-11-20 13:20:22.521848] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:40.869 [2024-11-20 13:20:22.521985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70598 ] 00:06:41.129 [2024-11-20 13:20:22.680110] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:41.129 [2024-11-20 13:20:22.680179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.129 [2024-11-20 13:20:22.730353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.071 13:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.071 13:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:42.071 13:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70595 00:06:42.071 13:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70595 00:06:42.071 13:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.331 13:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70595 00:06:42.331 13:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70595 ']' 00:06:42.331 13:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70595 00:06:42.331 13:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:42.331 13:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.331 13:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70595 00:06:42.331 13:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.331 13:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.331 killing process with pid 70595 00:06:42.331 13:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 70595' 00:06:42.331 13:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70595 00:06:42.331 13:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70595 00:06:43.269 13:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70598 00:06:43.269 13:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70598 ']' 00:06:43.269 13:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70598 00:06:43.269 13:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:43.269 13:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.269 13:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70598 00:06:43.269 13:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.269 13:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.269 killing process with pid 70598 00:06:43.269 13:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70598' 00:06:43.269 13:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70598 00:06:43.269 13:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70598 00:06:43.529 00:06:43.529 real 0m3.040s 00:06:43.529 user 0m3.171s 00:06:43.529 sys 0m1.018s 00:06:43.529 13:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:43.529 13:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.529 ************************************ 00:06:43.529 END TEST non_locking_app_on_locked_coremask 00:06:43.529 ************************************ 00:06:43.529 13:20:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:43.529 13:20:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.529 13:20:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.529 13:20:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.529 ************************************ 00:06:43.529 START TEST locking_app_on_unlocked_coremask 00:06:43.529 ************************************ 00:06:43.529 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:43.529 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70669 00:06:43.529 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:43.529 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70669 /var/tmp/spdk.sock 00:06:43.529 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70669 ']' 00:06:43.529 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.529 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:43.529 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.529 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.529 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.529 [2024-11-20 13:20:25.155901] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:43.529 [2024-11-20 13:20:25.156046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70669 ] 00:06:43.790 [2024-11-20 13:20:25.310153] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:43.790 [2024-11-20 13:20:25.310211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.790 [2024-11-20 13:20:25.336022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.362 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.362 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:44.362 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70679 00:06:44.362 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70679 /var/tmp/spdk2.sock 00:06:44.362 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:44.362 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70679 ']' 
00:06:44.362 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.362 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.362 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.362 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.362 13:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.622 [2024-11-20 13:20:26.065254] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:44.622 [2024-11-20 13:20:26.065420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70679 ] 00:06:44.622 [2024-11-20 13:20:26.214393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.622 [2024-11-20 13:20:26.271807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.563 13:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.563 13:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:45.563 13:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70679 00:06:45.563 13:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70679 00:06:45.563 13:20:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.823 13:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70669 00:06:45.823 13:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70669 ']' 00:06:45.823 13:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70669 00:06:45.823 13:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.823 13:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.823 13:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70669 00:06:45.823 13:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.823 13:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.823 killing process with pid 70669 00:06:45.823 13:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70669' 00:06:45.823 13:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70669 00:06:45.823 13:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70669 00:06:46.393 13:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70679 00:06:46.393 13:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70679 ']' 00:06:46.393 13:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70679 00:06:46.393 13:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:06:46.393 13:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.393 13:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70679 00:06:46.653 13:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.653 13:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.653 killing process with pid 70679 00:06:46.653 13:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70679' 00:06:46.653 13:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70679 00:06:46.653 13:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70679 00:06:46.914 00:06:46.914 real 0m3.383s 00:06:46.914 user 0m3.608s 00:06:46.914 sys 0m0.981s 00:06:46.914 13:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.914 13:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.914 ************************************ 00:06:46.914 END TEST locking_app_on_unlocked_coremask 00:06:46.914 ************************************ 00:06:46.914 13:20:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:46.914 13:20:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.914 13:20:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.914 13:20:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.914 ************************************ 00:06:46.914 START TEST 
locking_app_on_locked_coremask 00:06:46.914 ************************************ 00:06:46.914 13:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:46.914 13:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70743 00:06:46.914 13:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70743 /var/tmp/spdk.sock 00:06:46.914 13:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.914 13:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70743 ']' 00:06:46.914 13:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.914 13:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.914 13:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.914 13:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.914 13:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.175 [2024-11-20 13:20:28.609008] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:47.175 [2024-11-20 13:20:28.609158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70743 ] 00:06:47.175 [2024-11-20 13:20:28.766167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.175 [2024-11-20 13:20:28.792986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.745 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.745 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:47.745 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70759 00:06:47.745 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70759 /var/tmp/spdk2.sock 00:06:47.745 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.745 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:48.006 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70759 /var/tmp/spdk2.sock 00:06:48.006 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:48.006 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.006 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:48.006 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:06:48.006 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 70759 /var/tmp/spdk2.sock 00:06:48.006 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70759 ']' 00:06:48.006 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.006 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.006 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.006 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.006 13:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 [2024-11-20 13:20:29.508044] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:48.006 [2024-11-20 13:20:29.508207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70759 ] 00:06:48.006 [2024-11-20 13:20:29.660179] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70743 has claimed it. 00:06:48.006 [2024-11-20 13:20:29.660271] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:48.574 ERROR: process (pid: 70759) is no longer running 00:06:48.574 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70759) - No such process 00:06:48.574 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.574 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:48.574 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:48.574 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.574 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:48.574 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.574 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70743 00:06:48.574 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70743 00:06:48.574 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.160 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70743 00:06:49.160 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70743 ']' 00:06:49.160 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70743 00:06:49.160 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:49.160 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.160 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70743 00:06:49.160 
13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.160 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.160 killing process with pid 70743 00:06:49.160 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70743' 00:06:49.160 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70743 00:06:49.160 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70743 00:06:49.420 00:06:49.420 real 0m2.414s 00:06:49.420 user 0m2.574s 00:06:49.420 sys 0m0.718s 00:06:49.420 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.421 13:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.421 ************************************ 00:06:49.421 END TEST locking_app_on_locked_coremask 00:06:49.421 ************************************ 00:06:49.421 13:20:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:49.421 13:20:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.421 13:20:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.421 13:20:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.421 ************************************ 00:06:49.421 START TEST locking_overlapped_coremask 00:06:49.421 ************************************ 00:06:49.421 13:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:49.421 13:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70801 00:06:49.421 13:20:30 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:49.421 13:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 70801 /var/tmp/spdk.sock 00:06:49.421 13:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 70801 ']' 00:06:49.421 13:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.421 13:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.421 13:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.421 13:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.421 13:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.681 [2024-11-20 13:20:31.087961] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:49.681 [2024-11-20 13:20:31.088109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70801 ] 00:06:49.681 [2024-11-20 13:20:31.245986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.681 [2024-11-20 13:20:31.273507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.681 [2024-11-20 13:20:31.273595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.681 [2024-11-20 13:20:31.273708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70819 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70819 /var/tmp/spdk2.sock 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70819 /var/tmp/spdk2.sock 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 70819 /var/tmp/spdk2.sock 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 70819 ']' 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.251 13:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.510 [2024-11-20 13:20:31.983181] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:50.510 [2024-11-20 13:20:31.983352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70819 ] 00:06:50.510 [2024-11-20 13:20:32.136325] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70801 has claimed it. 00:06:50.510 [2024-11-20 13:20:32.136397] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:51.079 ERROR: process (pid: 70819) is no longer running 00:06:51.079 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70819) - No such process 00:06:51.079 13:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.079 13:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:51.079 13:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:51.079 13:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.079 13:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:51.079 13:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.080 13:20:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:51.080 13:20:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:51.080 13:20:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:51.080 13:20:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:51.080 13:20:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 70801 00:06:51.080 13:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 70801 ']' 00:06:51.080 13:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 70801 00:06:51.080 13:20:32 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:51.080 13:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.080 13:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70801 00:06:51.080 13:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.080 13:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.080 killing process with pid 70801 00:06:51.080 13:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70801' 00:06:51.080 13:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 70801 00:06:51.080 13:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 70801 00:06:51.649 00:06:51.649 real 0m2.036s 00:06:51.649 user 0m5.460s 00:06:51.649 sys 0m0.511s 00:06:51.649 13:20:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.650 13:20:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.650 ************************************ 00:06:51.650 END TEST locking_overlapped_coremask 00:06:51.650 ************************************ 00:06:51.650 13:20:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:51.650 13:20:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.650 13:20:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.650 13:20:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.650 ************************************ 00:06:51.650 START TEST 
locking_overlapped_coremask_via_rpc 00:06:51.650 ************************************ 00:06:51.650 13:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:51.650 13:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70861 00:06:51.650 13:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:51.650 13:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 70861 /var/tmp/spdk.sock 00:06:51.650 13:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70861 ']' 00:06:51.650 13:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.650 13:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.650 13:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.650 13:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.650 13:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.650 [2024-11-20 13:20:33.191298] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:51.650 [2024-11-20 13:20:33.191444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70861 ] 00:06:51.910 [2024-11-20 13:20:33.344816] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:51.910 [2024-11-20 13:20:33.344874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.910 [2024-11-20 13:20:33.372472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.910 [2024-11-20 13:20:33.372576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.910 [2024-11-20 13:20:33.372718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.479 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.479 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:52.479 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70879 00:06:52.479 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 70879 /var/tmp/spdk2.sock 00:06:52.479 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:52.479 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70879 ']' 00:06:52.479 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.479 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.479 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.480 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.480 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.480 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.480 [2024-11-20 13:20:34.123106] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:52.480 [2024-11-20 13:20:34.123247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70879 ] 00:06:52.739 [2024-11-20 13:20:34.284263] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:52.739 [2024-11-20 13:20:34.284373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.739 [2024-11-20 13:20:34.349149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.739 [2024-11-20 13:20:34.349200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.739 [2024-11-20 13:20:34.349349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:53.309 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.309 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.309 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.309 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.309 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.309 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.309 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.309 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:53.309 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.309 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:53.309 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.309 13:20:34 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:53.309 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.309 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.309 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.309 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.309 [2024-11-20 13:20:34.972231] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70861 has claimed it. 00:06:53.569 request: 00:06:53.569 { 00:06:53.569 "method": "framework_enable_cpumask_locks", 00:06:53.569 "req_id": 1 00:06:53.569 } 00:06:53.569 Got JSON-RPC error response 00:06:53.569 response: 00:06:53.569 { 00:06:53.569 "code": -32603, 00:06:53.569 "message": "Failed to claim CPU core: 2" 00:06:53.569 } 00:06:53.569 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:53.569 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:53.569 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.569 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:53.569 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.569 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 70861 /var/tmp/spdk.sock 00:06:53.569 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 70861 ']' 00:06:53.569 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.569 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.569 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.569 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.569 13:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.569 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.569 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.569 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 70879 /var/tmp/spdk2.sock 00:06:53.569 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70879 ']' 00:06:53.569 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.570 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.570 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:53.570 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.570 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.829 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.830 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.830 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.830 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.830 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.830 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.830 00:06:53.830 real 0m2.328s 00:06:53.830 user 0m1.089s 00:06:53.830 sys 0m0.162s 00:06:53.830 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.830 13:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.830 ************************************ 00:06:53.830 END TEST locking_overlapped_coremask_via_rpc 00:06:53.830 ************************************ 00:06:53.830 13:20:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.830 13:20:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70861 ]] 00:06:53.830 13:20:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 70861 00:06:53.830 13:20:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 70861 ']' 00:06:53.830 13:20:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 70861 00:06:53.830 13:20:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:53.830 13:20:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.830 13:20:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70861 00:06:54.089 killing process with pid 70861 00:06:54.089 13:20:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.089 13:20:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.089 13:20:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70861' 00:06:54.089 13:20:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 70861 00:06:54.089 13:20:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 70861 00:06:54.349 13:20:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70879 ]] 00:06:54.349 13:20:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70879 00:06:54.349 13:20:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 70879 ']' 00:06:54.349 13:20:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 70879 00:06:54.349 13:20:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:54.349 13:20:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.349 13:20:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70879 00:06:54.349 killing process with pid 70879 00:06:54.349 13:20:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:54.349 13:20:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:54.349 13:20:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 70879' 00:06:54.349 13:20:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 70879 00:06:54.349 13:20:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 70879 00:06:54.920 13:20:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:54.920 13:20:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:54.920 13:20:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70861 ]] 00:06:54.920 13:20:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 70861 00:06:54.920 13:20:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 70861 ']' 00:06:54.920 13:20:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 70861 00:06:54.920 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (70861) - No such process 00:06:54.920 13:20:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 70861 is not found' 00:06:54.920 Process with pid 70861 is not found 00:06:54.920 13:20:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70879 ]] 00:06:54.920 13:20:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70879 00:06:54.920 13:20:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 70879 ']' 00:06:54.920 13:20:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 70879 00:06:54.920 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (70879) - No such process 00:06:54.920 Process with pid 70879 is not found 00:06:54.920 13:20:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 70879 is not found' 00:06:54.920 13:20:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:54.920 00:06:54.920 real 0m17.987s 00:06:54.920 user 0m30.601s 00:06:54.920 sys 0m5.620s 00:06:54.920 13:20:36 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.920 13:20:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.920 
************************************ 00:06:54.920 END TEST cpu_locks 00:06:54.920 ************************************ 00:06:54.920 00:06:54.920 real 0m45.890s 00:06:54.920 user 1m28.773s 00:06:54.920 sys 0m9.348s 00:06:54.920 13:20:36 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.920 13:20:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.920 ************************************ 00:06:54.920 END TEST event 00:06:54.920 ************************************ 00:06:54.920 13:20:36 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:54.920 13:20:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.920 13:20:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.920 13:20:36 -- common/autotest_common.sh@10 -- # set +x 00:06:54.920 ************************************ 00:06:54.920 START TEST thread 00:06:54.920 ************************************ 00:06:54.920 13:20:36 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:54.920 * Looking for test storage... 
00:06:54.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:54.920 13:20:36 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.920 13:20:36 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.920 13:20:36 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.181 13:20:36 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.181 13:20:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.181 13:20:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.181 13:20:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.181 13:20:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.181 13:20:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.181 13:20:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.181 13:20:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.181 13:20:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.181 13:20:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.181 13:20:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.181 13:20:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.181 13:20:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:55.181 13:20:36 thread -- scripts/common.sh@345 -- # : 1 00:06:55.181 13:20:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.181 13:20:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.181 13:20:36 thread -- scripts/common.sh@365 -- # decimal 1 00:06:55.181 13:20:36 thread -- scripts/common.sh@353 -- # local d=1 00:06:55.181 13:20:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.181 13:20:36 thread -- scripts/common.sh@355 -- # echo 1 00:06:55.181 13:20:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.181 13:20:36 thread -- scripts/common.sh@366 -- # decimal 2 00:06:55.181 13:20:36 thread -- scripts/common.sh@353 -- # local d=2 00:06:55.181 13:20:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.181 13:20:36 thread -- scripts/common.sh@355 -- # echo 2 00:06:55.181 13:20:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.181 13:20:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.181 13:20:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.181 13:20:36 thread -- scripts/common.sh@368 -- # return 0 00:06:55.181 13:20:36 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.181 13:20:36 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.181 --rc genhtml_branch_coverage=1 00:06:55.181 --rc genhtml_function_coverage=1 00:06:55.181 --rc genhtml_legend=1 00:06:55.181 --rc geninfo_all_blocks=1 00:06:55.181 --rc geninfo_unexecuted_blocks=1 00:06:55.181 00:06:55.181 ' 00:06:55.181 13:20:36 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.181 --rc genhtml_branch_coverage=1 00:06:55.181 --rc genhtml_function_coverage=1 00:06:55.181 --rc genhtml_legend=1 00:06:55.181 --rc geninfo_all_blocks=1 00:06:55.181 --rc geninfo_unexecuted_blocks=1 00:06:55.181 00:06:55.181 ' 00:06:55.181 13:20:36 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.181 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.181 --rc genhtml_branch_coverage=1 00:06:55.181 --rc genhtml_function_coverage=1 00:06:55.181 --rc genhtml_legend=1 00:06:55.181 --rc geninfo_all_blocks=1 00:06:55.181 --rc geninfo_unexecuted_blocks=1 00:06:55.181 00:06:55.181 ' 00:06:55.181 13:20:36 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.181 --rc genhtml_branch_coverage=1 00:06:55.181 --rc genhtml_function_coverage=1 00:06:55.181 --rc genhtml_legend=1 00:06:55.181 --rc geninfo_all_blocks=1 00:06:55.181 --rc geninfo_unexecuted_blocks=1 00:06:55.181 00:06:55.181 ' 00:06:55.181 13:20:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.181 13:20:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:55.181 13:20:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.181 13:20:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.181 ************************************ 00:06:55.181 START TEST thread_poller_perf 00:06:55.181 ************************************ 00:06:55.181 13:20:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.181 [2024-11-20 13:20:36.680700] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:55.181 [2024-11-20 13:20:36.680834] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71017 ] 00:06:55.181 [2024-11-20 13:20:36.831975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.498 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:55.498 [2024-11-20 13:20:36.857251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.438 [2024-11-20T13:20:38.106Z] ====================================== 00:06:56.438 [2024-11-20T13:20:38.106Z] busy:2300321384 (cyc) 00:06:56.438 [2024-11-20T13:20:38.106Z] total_run_count: 411000 00:06:56.438 [2024-11-20T13:20:38.106Z] tsc_hz: 2290000000 (cyc) 00:06:56.438 [2024-11-20T13:20:38.106Z] ====================================== 00:06:56.438 [2024-11-20T13:20:38.106Z] poller_cost: 5596 (cyc), 2443 (nsec) 00:06:56.438 00:06:56.438 real 0m1.281s 00:06:56.438 user 0m1.103s 00:06:56.438 sys 0m0.074s 00:06:56.438 13:20:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.438 13:20:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.438 ************************************ 00:06:56.438 END TEST thread_poller_perf 00:06:56.438 ************************************ 00:06:56.438 13:20:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.438 13:20:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:56.438 13:20:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.438 13:20:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.438 ************************************ 00:06:56.438 START TEST thread_poller_perf 00:06:56.438 
************************************ 00:06:56.438 13:20:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.438 [2024-11-20 13:20:38.018157] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:56.438 [2024-11-20 13:20:38.018321] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71048 ] 00:06:56.698 [2024-11-20 13:20:38.175204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.698 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:56.698 [2024-11-20 13:20:38.200736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.638 [2024-11-20T13:20:39.306Z] ====================================== 00:06:57.638 [2024-11-20T13:20:39.306Z] busy:2293182604 (cyc) 00:06:57.638 [2024-11-20T13:20:39.306Z] total_run_count: 5340000 00:06:57.638 [2024-11-20T13:20:39.306Z] tsc_hz: 2290000000 (cyc) 00:06:57.638 [2024-11-20T13:20:39.306Z] ====================================== 00:06:57.638 [2024-11-20T13:20:39.306Z] poller_cost: 429 (cyc), 187 (nsec) 00:06:57.638 00:06:57.638 real 0m1.283s 00:06:57.638 user 0m1.108s 00:06:57.638 sys 0m0.070s 00:06:57.638 13:20:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.638 13:20:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.638 ************************************ 00:06:57.638 END TEST thread_poller_perf 00:06:57.638 ************************************ 00:06:57.898 13:20:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:57.898 00:06:57.898 real 0m2.889s 00:06:57.898 user 0m2.357s 00:06:57.898 sys 0m0.339s 00:06:57.898 13:20:39 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.898 13:20:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.898 ************************************ 00:06:57.898 END TEST thread 00:06:57.898 ************************************ 00:06:57.898 13:20:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:57.898 13:20:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:57.898 13:20:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.898 13:20:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.898 13:20:39 -- common/autotest_common.sh@10 -- # set +x 00:06:57.898 ************************************ 00:06:57.898 START TEST app_cmdline 00:06:57.898 ************************************ 00:06:57.898 13:20:39 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:57.898 * Looking for test storage... 00:06:57.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:57.898 13:20:39 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.898 13:20:39 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.898 13:20:39 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.157 13:20:39 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.157 13:20:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.157 13:20:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.157 13:20:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.157 13:20:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.157 13:20:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.157 13:20:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.157 13:20:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.157 13:20:39 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:58.157 13:20:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.157 13:20:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.157 13:20:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.157 13:20:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:58.157 13:20:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:58.157 13:20:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.157 13:20:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.157 13:20:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:58.158 13:20:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:58.158 13:20:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.158 13:20:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:58.158 13:20:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.158 13:20:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:58.158 13:20:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:58.158 13:20:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.158 13:20:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:58.158 13:20:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.158 13:20:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.158 13:20:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.158 13:20:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:58.158 13:20:39 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.158 13:20:39 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.158 --rc genhtml_branch_coverage=1 00:06:58.158 --rc genhtml_function_coverage=1 00:06:58.158 --rc 
genhtml_legend=1 00:06:58.158 --rc geninfo_all_blocks=1 00:06:58.158 --rc geninfo_unexecuted_blocks=1 00:06:58.158 00:06:58.158 ' 00:06:58.158 13:20:39 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.158 --rc genhtml_branch_coverage=1 00:06:58.158 --rc genhtml_function_coverage=1 00:06:58.158 --rc genhtml_legend=1 00:06:58.158 --rc geninfo_all_blocks=1 00:06:58.158 --rc geninfo_unexecuted_blocks=1 00:06:58.158 00:06:58.158 ' 00:06:58.158 13:20:39 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.158 --rc genhtml_branch_coverage=1 00:06:58.158 --rc genhtml_function_coverage=1 00:06:58.158 --rc genhtml_legend=1 00:06:58.158 --rc geninfo_all_blocks=1 00:06:58.158 --rc geninfo_unexecuted_blocks=1 00:06:58.158 00:06:58.158 ' 00:06:58.158 13:20:39 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.158 --rc genhtml_branch_coverage=1 00:06:58.158 --rc genhtml_function_coverage=1 00:06:58.158 --rc genhtml_legend=1 00:06:58.158 --rc geninfo_all_blocks=1 00:06:58.158 --rc geninfo_unexecuted_blocks=1 00:06:58.158 00:06:58.158 ' 00:06:58.158 13:20:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:58.158 13:20:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71137 00:06:58.158 13:20:39 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:58.158 13:20:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71137 00:06:58.158 13:20:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71137 ']' 00:06:58.158 13:20:39 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.158 13:20:39 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:58.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.158 13:20:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.158 13:20:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.158 13:20:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:58.158 [2024-11-20 13:20:39.685712] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:58.158 [2024-11-20 13:20:39.685845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71137 ] 00:06:58.418 [2024-11-20 13:20:39.838607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.418 [2024-11-20 13:20:39.864109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.987 13:20:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.987 13:20:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:58.987 13:20:40 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:59.246 { 00:06:59.246 "version": "SPDK v25.01-pre git sha1 557f022f6", 00:06:59.246 "fields": { 00:06:59.246 "major": 25, 00:06:59.246 "minor": 1, 00:06:59.246 "patch": 0, 00:06:59.246 "suffix": "-pre", 00:06:59.246 "commit": "557f022f6" 00:06:59.246 } 00:06:59.246 } 00:06:59.246 13:20:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:59.246 13:20:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:59.246 13:20:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:59.246 13:20:40 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:59.246 13:20:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:59.246 13:20:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.246 13:20:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.246 13:20:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:59.246 13:20:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:59.246 13:20:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.246 13:20:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:59.246 13:20:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:59.246 13:20:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.246 13:20:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:59.246 13:20:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.246 13:20:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.246 13:20:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.246 13:20:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.246 13:20:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.246 13:20:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.246 13:20:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.246 13:20:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.246 13:20:40 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:59.246 13:20:40 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.505 request: 00:06:59.505 { 00:06:59.505 "method": "env_dpdk_get_mem_stats", 00:06:59.505 "req_id": 1 00:06:59.505 } 00:06:59.505 Got JSON-RPC error response 00:06:59.505 response: 00:06:59.505 { 00:06:59.505 "code": -32601, 00:06:59.505 "message": "Method not found" 00:06:59.505 } 00:06:59.505 13:20:40 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:59.505 13:20:40 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:59.505 13:20:40 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:59.505 13:20:40 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:59.505 13:20:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71137 00:06:59.505 13:20:40 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71137 ']' 00:06:59.505 13:20:40 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71137 00:06:59.505 13:20:40 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:59.505 13:20:40 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.505 13:20:40 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71137 00:06:59.505 13:20:40 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.505 13:20:40 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.505 killing process with pid 71137 00:06:59.505 13:20:40 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71137' 00:06:59.505 13:20:40 app_cmdline -- common/autotest_common.sh@973 -- # kill 71137 00:06:59.505 13:20:40 app_cmdline -- common/autotest_common.sh@978 -- # wait 71137 00:06:59.765 00:06:59.765 real 0m1.963s 00:06:59.765 user 0m2.202s 00:06:59.765 sys 0m0.537s 00:06:59.765 13:20:41 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.765 13:20:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.765 ************************************ 00:06:59.765 END TEST app_cmdline 00:06:59.765 ************************************ 00:06:59.765 13:20:41 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:59.765 13:20:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.765 13:20:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.765 13:20:41 -- common/autotest_common.sh@10 -- # set +x 00:06:59.765 ************************************ 00:06:59.765 START TEST version 00:06:59.765 ************************************ 00:06:59.765 13:20:41 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:00.025 * Looking for test storage... 00:07:00.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:00.025 13:20:41 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.025 13:20:41 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.025 13:20:41 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.025 13:20:41 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.025 13:20:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.025 13:20:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.025 13:20:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.025 13:20:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.025 13:20:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.025 13:20:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.025 13:20:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.025 13:20:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.025 13:20:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.025 13:20:41 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:00.025 13:20:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.025 13:20:41 version -- scripts/common.sh@344 -- # case "$op" in 00:07:00.025 13:20:41 version -- scripts/common.sh@345 -- # : 1 00:07:00.025 13:20:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.025 13:20:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.025 13:20:41 version -- scripts/common.sh@365 -- # decimal 1 00:07:00.025 13:20:41 version -- scripts/common.sh@353 -- # local d=1 00:07:00.025 13:20:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.025 13:20:41 version -- scripts/common.sh@355 -- # echo 1 00:07:00.025 13:20:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.025 13:20:41 version -- scripts/common.sh@366 -- # decimal 2 00:07:00.025 13:20:41 version -- scripts/common.sh@353 -- # local d=2 00:07:00.025 13:20:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.025 13:20:41 version -- scripts/common.sh@355 -- # echo 2 00:07:00.025 13:20:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.025 13:20:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.025 13:20:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.025 13:20:41 version -- scripts/common.sh@368 -- # return 0 00:07:00.025 13:20:41 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.025 13:20:41 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.025 --rc genhtml_branch_coverage=1 00:07:00.025 --rc genhtml_function_coverage=1 00:07:00.025 --rc genhtml_legend=1 00:07:00.025 --rc geninfo_all_blocks=1 00:07:00.025 --rc geninfo_unexecuted_blocks=1 00:07:00.025 00:07:00.025 ' 00:07:00.025 13:20:41 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:07:00.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.025 --rc genhtml_branch_coverage=1 00:07:00.025 --rc genhtml_function_coverage=1 00:07:00.025 --rc genhtml_legend=1 00:07:00.025 --rc geninfo_all_blocks=1 00:07:00.025 --rc geninfo_unexecuted_blocks=1 00:07:00.025 00:07:00.025 ' 00:07:00.025 13:20:41 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.025 --rc genhtml_branch_coverage=1 00:07:00.025 --rc genhtml_function_coverage=1 00:07:00.025 --rc genhtml_legend=1 00:07:00.025 --rc geninfo_all_blocks=1 00:07:00.025 --rc geninfo_unexecuted_blocks=1 00:07:00.025 00:07:00.025 ' 00:07:00.025 13:20:41 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.025 --rc genhtml_branch_coverage=1 00:07:00.025 --rc genhtml_function_coverage=1 00:07:00.025 --rc genhtml_legend=1 00:07:00.025 --rc geninfo_all_blocks=1 00:07:00.025 --rc geninfo_unexecuted_blocks=1 00:07:00.025 00:07:00.025 ' 00:07:00.025 13:20:41 version -- app/version.sh@17 -- # get_header_version major 00:07:00.025 13:20:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.025 13:20:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.025 13:20:41 version -- app/version.sh@14 -- # cut -f2 00:07:00.025 13:20:41 version -- app/version.sh@17 -- # major=25 00:07:00.025 13:20:41 version -- app/version.sh@18 -- # get_header_version minor 00:07:00.025 13:20:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.025 13:20:41 version -- app/version.sh@14 -- # cut -f2 00:07:00.025 13:20:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.026 13:20:41 version -- app/version.sh@18 -- # minor=1 00:07:00.026 13:20:41 
version -- app/version.sh@19 -- # get_header_version patch 00:07:00.026 13:20:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.026 13:20:41 version -- app/version.sh@14 -- # cut -f2 00:07:00.026 13:20:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.026 13:20:41 version -- app/version.sh@19 -- # patch=0 00:07:00.026 13:20:41 version -- app/version.sh@20 -- # get_header_version suffix 00:07:00.026 13:20:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.026 13:20:41 version -- app/version.sh@14 -- # cut -f2 00:07:00.026 13:20:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.026 13:20:41 version -- app/version.sh@20 -- # suffix=-pre 00:07:00.026 13:20:41 version -- app/version.sh@22 -- # version=25.1 00:07:00.026 13:20:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:00.026 13:20:41 version -- app/version.sh@28 -- # version=25.1rc0 00:07:00.026 13:20:41 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:00.026 13:20:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:00.026 13:20:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:00.026 13:20:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:00.026 00:07:00.026 real 0m0.282s 00:07:00.026 user 0m0.168s 00:07:00.026 sys 0m0.156s 00:07:00.026 13:20:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.026 13:20:41 version -- common/autotest_common.sh@10 -- # set +x 00:07:00.026 ************************************ 00:07:00.026 END TEST version 00:07:00.026 ************************************ 00:07:00.285 
13:20:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:00.285 13:20:41 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:00.285 13:20:41 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:00.285 13:20:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.285 13:20:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.285 13:20:41 -- common/autotest_common.sh@10 -- # set +x 00:07:00.285 ************************************ 00:07:00.285 START TEST bdev_raid 00:07:00.285 ************************************ 00:07:00.285 13:20:41 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:00.285 * Looking for test storage... 00:07:00.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:00.286 13:20:41 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.286 13:20:41 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.286 13:20:41 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.286 13:20:41 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.286 13:20:41 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:00.286 13:20:41 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.286 13:20:41 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.286 --rc genhtml_branch_coverage=1 00:07:00.286 --rc genhtml_function_coverage=1 00:07:00.286 --rc genhtml_legend=1 00:07:00.286 --rc geninfo_all_blocks=1 00:07:00.286 --rc geninfo_unexecuted_blocks=1 00:07:00.286 00:07:00.286 ' 00:07:00.286 13:20:41 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.286 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:00.286 --rc genhtml_branch_coverage=1 00:07:00.286 --rc genhtml_function_coverage=1 00:07:00.286 --rc genhtml_legend=1 00:07:00.286 --rc geninfo_all_blocks=1 00:07:00.286 --rc geninfo_unexecuted_blocks=1 00:07:00.286 00:07:00.286 ' 00:07:00.286 13:20:41 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.286 --rc genhtml_branch_coverage=1 00:07:00.286 --rc genhtml_function_coverage=1 00:07:00.286 --rc genhtml_legend=1 00:07:00.286 --rc geninfo_all_blocks=1 00:07:00.286 --rc geninfo_unexecuted_blocks=1 00:07:00.286 00:07:00.286 ' 00:07:00.286 13:20:41 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.286 --rc genhtml_branch_coverage=1 00:07:00.286 --rc genhtml_function_coverage=1 00:07:00.286 --rc genhtml_legend=1 00:07:00.286 --rc geninfo_all_blocks=1 00:07:00.286 --rc geninfo_unexecuted_blocks=1 00:07:00.286 00:07:00.286 ' 00:07:00.286 13:20:41 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:00.286 13:20:41 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:00.286 13:20:41 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:00.286 13:20:41 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:00.546 13:20:41 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:00.546 13:20:41 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:00.546 13:20:41 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:00.546 13:20:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.546 13:20:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.546 13:20:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:00.546 ************************************ 
00:07:00.546 START TEST raid1_resize_data_offset_test 00:07:00.546 ************************************ 00:07:00.546 13:20:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:00.546 13:20:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71297 00:07:00.546 13:20:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:00.546 13:20:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71297' 00:07:00.546 Process raid pid: 71297 00:07:00.546 13:20:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71297 00:07:00.546 13:20:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 71297 ']' 00:07:00.546 13:20:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.546 13:20:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.546 13:20:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.546 13:20:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.546 13:20:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.546 [2024-11-20 13:20:42.058068] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:00.546 [2024-11-20 13:20:42.058874] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.806 [2024-11-20 13:20:42.219109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.806 [2024-11-20 13:20:42.247285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.806 [2024-11-20 13:20:42.290432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.806 [2024-11-20 13:20:42.290562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.376 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.376 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:01.376 13:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:01.376 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.376 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.376 malloc0 00:07:01.376 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.376 13:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:01.376 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.376 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.376 malloc1 00:07:01.376 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.376 13:20:42 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:01.376 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.376 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.376 null0 00:07:01.376 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.376 13:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:01.376 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.376 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.376 [2024-11-20 13:20:42.956504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:01.376 [2024-11-20 13:20:42.958306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:01.376 [2024-11-20 13:20:42.958349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:01.377 [2024-11-20 13:20:42.958474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:01.377 [2024-11-20 13:20:42.958486] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:01.377 [2024-11-20 13:20:42.958742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:07:01.377 [2024-11-20 13:20:42.958859] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:01.377 [2024-11-20 13:20:42.958875] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:01.377 [2024-11-20 13:20:42.959000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:01.377 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.377 13:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.377 13:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:01.377 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.377 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.377 13:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.377 13:20:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:01.377 13:20:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:01.377 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.377 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.377 [2024-11-20 13:20:43.012380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:01.377 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.377 13:20:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:01.377 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.377 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.636 malloc2 00:07:01.636 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.636 13:20:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:01.636 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.636 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.636 [2024-11-20 13:20:43.144299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:01.636 [2024-11-20 13:20:43.151070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:01.636 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.636 [2024-11-20 13:20:43.154097] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:01.636 13:20:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.636 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.636 13:20:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:01.636 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.636 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.636 13:20:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:01.636 13:20:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71297 00:07:01.636 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 71297 ']' 00:07:01.636 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 71297 00:07:01.636 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:07:01.637 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:07:01.637 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71297 00:07:01.637 killing process with pid 71297 00:07:01.637 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.637 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.637 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71297' 00:07:01.637 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 71297 00:07:01.637 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 71297 00:07:01.637 [2024-11-20 13:20:43.234032] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:01.637 [2024-11-20 13:20:43.235792] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:01.637 [2024-11-20 13:20:43.235856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.637 [2024-11-20 13:20:43.235874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:01.637 [2024-11-20 13:20:43.242176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.637 [2024-11-20 13:20:43.242462] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.637 [2024-11-20 13:20:43.242477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:01.896 [2024-11-20 13:20:43.455550] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:02.157 ************************************ 00:07:02.157 END TEST raid1_resize_data_offset_test 00:07:02.157 ************************************ 00:07:02.157 13:20:43 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:07:02.157 00:07:02.157 real 0m1.687s 00:07:02.157 user 0m1.685s 00:07:02.157 sys 0m0.435s 00:07:02.157 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.157 13:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.157 13:20:43 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:02.157 13:20:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:02.157 13:20:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.157 13:20:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.157 ************************************ 00:07:02.157 START TEST raid0_resize_superblock_test 00:07:02.157 ************************************ 00:07:02.157 13:20:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:02.157 13:20:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:02.157 Process raid pid: 71352 00:07:02.157 13:20:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71352 00:07:02.157 13:20:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:02.157 13:20:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71352' 00:07:02.157 13:20:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71352 00:07:02.157 13:20:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71352 ']' 00:07:02.157 13:20:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.157 13:20:43 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.157 13:20:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.157 13:20:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.157 13:20:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.157 [2024-11-20 13:20:43.787945] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:02.157 [2024-11-20 13:20:43.788155] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.417 [2024-11-20 13:20:43.942595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.417 [2024-11-20 13:20:43.967590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.417 [2024-11-20 13:20:44.009486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.417 [2024-11-20 13:20:44.009598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.996 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.996 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:02.996 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:02.996 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.996 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:03.268 malloc0 00:07:03.268 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.268 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:03.268 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.268 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.268 [2024-11-20 13:20:44.744464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:03.268 [2024-11-20 13:20:44.744582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.268 [2024-11-20 13:20:44.744634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:03.268 [2024-11-20 13:20:44.744645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.268 [2024-11-20 13:20:44.746728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.268 [2024-11-20 13:20:44.746768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:03.268 pt0 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.269 d7cff33a-b78e-4ab1-9c11-8b312e13b446 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.269 754e607e-f038-4592-8b1f-9a0636c16afa 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.269 16760990-eb72-4237-b4f3-22d59ecf72da 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.269 [2024-11-20 13:20:44.880887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 754e607e-f038-4592-8b1f-9a0636c16afa is claimed 00:07:03.269 [2024-11-20 13:20:44.880963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 16760990-eb72-4237-b4f3-22d59ecf72da is claimed 00:07:03.269 [2024-11-20 13:20:44.881100] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:03.269 [2024-11-20 13:20:44.881125] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:03.269 [2024-11-20 13:20:44.881373] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:03.269 [2024-11-20 13:20:44.881544] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:03.269 [2024-11-20 13:20:44.881555] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:03.269 [2024-11-20 13:20:44.881703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.269 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.530 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:03.530 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:03.530 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:03.530 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.530 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.530 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.530 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:03.530 13:20:44 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:03.530 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:03.530 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.530 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.530 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:03.530 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:03.530 [2024-11-20 13:20:44.968961] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.530 13:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.530 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:03.530 13:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.530 [2024-11-20 13:20:45.016852] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:03.530 [2024-11-20 13:20:45.016926] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '754e607e-f038-4592-8b1f-9a0636c16afa' was resized: old size 131072, new size 204800 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.530 [2024-11-20 13:20:45.028730] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:03.530 [2024-11-20 13:20:45.028753] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '16760990-eb72-4237-b4f3-22d59ecf72da' was resized: old size 131072, new size 204800 00:07:03.530 [2024-11-20 13:20:45.028779] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.530 13:20:45 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:03.530 [2024-11-20 13:20:45.140634] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.530 [2024-11-20 13:20:45.168381] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:07:03.530 [2024-11-20 13:20:45.168498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:03.530 [2024-11-20 13:20:45.168522] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:03.530 [2024-11-20 13:20:45.168533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:03.530 [2024-11-20 13:20:45.168663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.530 [2024-11-20 13:20:45.168702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.530 [2024-11-20 13:20:45.168715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.530 [2024-11-20 13:20:45.180321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:03.530 [2024-11-20 13:20:45.180376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.530 [2024-11-20 13:20:45.180396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:03.530 [2024-11-20 13:20:45.180405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.530 [2024-11-20 13:20:45.182562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.530 [2024-11-20 13:20:45.182638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:03.530 [2024-11-20 13:20:45.184112] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 754e607e-f038-4592-8b1f-9a0636c16afa 00:07:03.530 [2024-11-20 13:20:45.184164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 754e607e-f038-4592-8b1f-9a0636c16afa is claimed 00:07:03.530 [2024-11-20 13:20:45.184241] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 16760990-eb72-4237-b4f3-22d59ecf72da 00:07:03.530 [2024-11-20 13:20:45.184264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 16760990-eb72-4237-b4f3-22d59ecf72da is claimed 00:07:03.530 [2024-11-20 13:20:45.184347] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 16760990-eb72-4237-b4f3-22d59ecf72da (2) smaller than existing raid bdev Raid (3) 00:07:03.530 [2024-11-20 13:20:45.184365] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 754e607e-f038-4592-8b1f-9a0636c16afa: File exists 00:07:03.530 [2024-11-20 13:20:45.184417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:07:03.530 [2024-11-20 13:20:45.184425] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:03.530 [2024-11-20 13:20:45.184639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:07:03.530 [2024-11-20 13:20:45.184786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:07:03.530 [2024-11-20 13:20:45.184802] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:07:03.530 [2024-11-20 13:20:45.184952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.530 pt0 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.530 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.790 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.790 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:03.790 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:03.790 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:03.790 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:03.790 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.790 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.790 [2024-11-20 13:20:45.204588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.791 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.791 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:03.791 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:03.791 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:03.791 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71352 00:07:03.791 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71352 ']' 00:07:03.791 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71352 00:07:03.791 13:20:45 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:07:03.791 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.791 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71352 00:07:03.791 killing process with pid 71352 00:07:03.791 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.791 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.791 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71352' 00:07:03.791 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 71352 00:07:03.791 [2024-11-20 13:20:45.285847] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:03.791 [2024-11-20 13:20:45.285911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.791 [2024-11-20 13:20:45.285952] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.791 [2024-11-20 13:20:45.285960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:07:03.791 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 71352 00:07:03.791 [2024-11-20 13:20:45.443406] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:04.051 13:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:04.051 00:07:04.051 real 0m1.931s 00:07:04.051 user 0m2.201s 00:07:04.051 sys 0m0.455s 00:07:04.051 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.051 13:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.051 
************************************ 00:07:04.051 END TEST raid0_resize_superblock_test 00:07:04.051 ************************************ 00:07:04.051 13:20:45 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:04.051 13:20:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:04.051 13:20:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.051 13:20:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:04.311 ************************************ 00:07:04.311 START TEST raid1_resize_superblock_test 00:07:04.311 ************************************ 00:07:04.311 13:20:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:04.311 13:20:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:04.311 13:20:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71421 00:07:04.311 Process raid pid: 71421 00:07:04.311 13:20:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:04.311 13:20:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71421' 00:07:04.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:04.311 13:20:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71421 00:07:04.311 13:20:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71421 ']' 00:07:04.311 13:20:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.311 13:20:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.311 13:20:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.311 13:20:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.311 13:20:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.311 [2024-11-20 13:20:45.804402] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:04.311 [2024-11-20 13:20:45.804517] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.311 [2024-11-20 13:20:45.960068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.570 [2024-11-20 13:20:45.984394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.570 [2024-11-20 13:20:46.026208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.570 [2024-11-20 13:20:46.026242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.140 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.140 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:05.140 13:20:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:05.140 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.140 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.140 malloc0 00:07:05.140 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.140 13:20:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:05.140 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.140 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.140 [2024-11-20 13:20:46.771538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:05.140 [2024-11-20 13:20:46.771601] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.140 [2024-11-20 13:20:46.771631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:05.140 [2024-11-20 13:20:46.771648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.140 [2024-11-20 13:20:46.774023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.140 [2024-11-20 13:20:46.774061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:05.140 pt0 00:07:05.140 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.140 13:20:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:05.140 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.140 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.400 0302ad0d-b283-445f-ad53-36983fd46e62 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.400 11473b0c-a633-4758-b2fe-f52cab09ec28 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.400 13:20:46 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.400 0163cde2-ba33-48bc-adc1-a55004d0be09 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.400 [2024-11-20 13:20:46.908955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 11473b0c-a633-4758-b2fe-f52cab09ec28 is claimed 00:07:05.400 [2024-11-20 13:20:46.909045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0163cde2-ba33-48bc-adc1-a55004d0be09 is claimed 00:07:05.400 [2024-11-20 13:20:46.909151] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:05.400 [2024-11-20 13:20:46.909164] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:05.400 [2024-11-20 13:20:46.909460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:05.400 [2024-11-20 13:20:46.909615] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:05.400 [2024-11-20 13:20:46.909625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:05.400 [2024-11-20 13:20:46.909753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:05.400 13:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.400 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:05.400 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:05.400 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:05.400 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:05.400 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:05.400 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.400 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.400 [2024-11-20 
13:20:47.020971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.400 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.400 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:05.400 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:05.400 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:05.400 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:05.400 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.400 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.661 [2024-11-20 13:20:47.068826] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:05.661 [2024-11-20 13:20:47.068907] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '11473b0c-a633-4758-b2fe-f52cab09ec28' was resized: old size 131072, new size 204800 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.661 [2024-11-20 13:20:47.080746] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:05.661 [2024-11-20 13:20:47.080768] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0163cde2-ba33-48bc-adc1-a55004d0be09' was resized: old size 131072, new size 204800 00:07:05.661 
[2024-11-20 13:20:47.080795] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:05.661 13:20:47 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.661 [2024-11-20 13:20:47.184662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.661 [2024-11-20 13:20:47.228409] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:05.661 [2024-11-20 13:20:47.228472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:05.661 [2024-11-20 13:20:47.228504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:05.661 [2024-11-20 13:20:47.228663] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:05.661 [2024-11-20 13:20:47.228798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.661 [2024-11-20 13:20:47.228849] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.661 
[2024-11-20 13:20:47.228861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.661 [2024-11-20 13:20:47.240324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:05.661 [2024-11-20 13:20:47.240371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.661 [2024-11-20 13:20:47.240387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:05.661 [2024-11-20 13:20:47.240397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.661 [2024-11-20 13:20:47.242468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.661 [2024-11-20 13:20:47.242570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:05.661 [2024-11-20 13:20:47.243910] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 11473b0c-a633-4758-b2fe-f52cab09ec28 00:07:05.661 [2024-11-20 13:20:47.243959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 11473b0c-a633-4758-b2fe-f52cab09ec28 is claimed 00:07:05.661 [2024-11-20 13:20:47.244038] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0163cde2-ba33-48bc-adc1-a55004d0be09 00:07:05.661 [2024-11-20 13:20:47.244059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0163cde2-ba33-48bc-adc1-a55004d0be09 is claimed 00:07:05.661 [2024-11-20 13:20:47.244136] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 0163cde2-ba33-48bc-adc1-a55004d0be09 (2) smaller than existing raid bdev Raid (3) 00:07:05.661 [2024-11-20 13:20:47.244154] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 11473b0c-a633-4758-b2fe-f52cab09ec28: File exists 00:07:05.661 [2024-11-20 13:20:47.244203] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:07:05.661 [2024-11-20 13:20:47.244211] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:05.661 [2024-11-20 13:20:47.244432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:07:05.661 [2024-11-20 13:20:47.244584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:07:05.661 [2024-11-20 13:20:47.244599] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:07:05.661 [2024-11-20 13:20:47.244742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.661 pt0 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:05.661 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:05.662 [2024-11-20 13:20:47.264893] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.662 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.662 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:05.662 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:05.662 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:05.662 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71421 00:07:05.662 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71421 ']' 00:07:05.662 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71421 00:07:05.662 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:05.662 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.662 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71421 00:07:05.921 killing process with pid 71421 00:07:05.921 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.921 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.921 13:20:47 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 71421' 00:07:05.921 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 71421 00:07:05.921 [2024-11-20 13:20:47.350571] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.921 [2024-11-20 13:20:47.350625] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.921 [2024-11-20 13:20:47.350674] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.921 [2024-11-20 13:20:47.350684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:07:05.921 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 71421 00:07:05.921 [2024-11-20 13:20:47.508864] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.181 13:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:06.181 00:07:06.181 real 0m1.994s 00:07:06.181 user 0m2.330s 00:07:06.181 sys 0m0.458s 00:07:06.181 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.181 13:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.181 ************************************ 00:07:06.182 END TEST raid1_resize_superblock_test 00:07:06.182 ************************************ 00:07:06.182 13:20:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:06.182 13:20:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:06.182 13:20:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:06.182 13:20:47 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:06.182 13:20:47 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:06.182 13:20:47 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:06.182 
13:20:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.182 13:20:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.182 13:20:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.182 ************************************ 00:07:06.182 START TEST raid_function_test_raid0 00:07:06.182 ************************************ 00:07:06.182 13:20:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:06.182 13:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:06.182 13:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:06.182 13:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:06.182 13:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71497 00:07:06.182 13:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:06.182 13:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71497' 00:07:06.182 Process raid pid: 71497 00:07:06.182 13:20:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71497 00:07:06.182 13:20:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 71497 ']' 00:07:06.182 13:20:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.182 13:20:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.182 13:20:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:06.182 13:20:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.182 13:20:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:06.441 [2024-11-20 13:20:47.892877] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:06.441 [2024-11-20 13:20:47.893112] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.441 [2024-11-20 13:20:48.047943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.441 [2024-11-20 13:20:48.073288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.700 [2024-11-20 13:20:48.115996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.700 [2024-11-20 13:20:48.116053] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.270 13:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:07.271 Base_1 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.271 
13:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:07.271 Base_2 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:07.271 [2024-11-20 13:20:48.772047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:07.271 [2024-11-20 13:20:48.773870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:07.271 [2024-11-20 13:20:48.774014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:07.271 [2024-11-20 13:20:48.774031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:07.271 [2024-11-20 13:20:48.774314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:07.271 [2024-11-20 13:20:48.774426] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:07.271 [2024-11-20 13:20:48.774436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:07:07.271 [2024-11-20 13:20:48.774557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:07.271 13:20:48 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:07.271 13:20:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:07.531 [2024-11-20 13:20:49.019643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:07.531 /dev/nbd0 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.531 1+0 records in 00:07:07.531 1+0 records out 00:07:07.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00158068 s, 2.6 MB/s 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:07.531 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:07.791 { 00:07:07.791 "nbd_device": "/dev/nbd0", 00:07:07.791 "bdev_name": "raid" 00:07:07.791 } 00:07:07.791 ]' 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:07.791 { 00:07:07.791 "nbd_device": "/dev/nbd0", 00:07:07.791 "bdev_name": "raid" 00:07:07.791 } 00:07:07.791 ]' 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:07.791 4096+0 records in 00:07:07.791 4096+0 records out 00:07:07.791 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0370903 s, 56.5 MB/s 00:07:07.791 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:08.051 4096+0 records in 00:07:08.051 4096+0 records out 00:07:08.051 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.176244 s, 11.9 MB/s 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:08.051 128+0 records in 00:07:08.051 128+0 records out 00:07:08.051 65536 bytes (66 kB, 64 KiB) copied, 0.00116748 s, 56.1 MB/s 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:08.051 2035+0 records in 00:07:08.051 2035+0 records out 00:07:08.051 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0153114 s, 68.0 MB/s 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:08.051 456+0 records in 00:07:08.051 456+0 records out 00:07:08.051 233472 bytes (233 kB, 228 KiB) copied, 0.00373914 s, 62.4 MB/s 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.051 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:08.311 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:08.311 [2024-11-20 13:20:49.932873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.311 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:08.311 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:08.311 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.311 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.311 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:08.311 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:08.311 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.311 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:08.311 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:08.311 13:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:07:08.570 13:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71497 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 71497 ']' 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 71497 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71497 00:07:08.571 killing process with pid 71497 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71497' 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 71497 00:07:08.571 [2024-11-20 13:20:50.236670] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.571 [2024-11-20 13:20:50.236781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.571 13:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 71497 00:07:08.571 [2024-11-20 13:20:50.236835] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.571 [2024-11-20 13:20:50.236846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:07:08.830 [2024-11-20 13:20:50.259574] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.830 13:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:08.830 00:07:08.830 real 0m2.657s 00:07:08.830 user 0m3.341s 00:07:08.830 sys 0m0.885s 00:07:08.830 13:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.830 ************************************ 00:07:08.830 END TEST raid_function_test_raid0 00:07:08.830 ************************************ 00:07:08.830 13:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:09.089 13:20:50 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:09.089 13:20:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:09.089 13:20:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.089 13:20:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.089 
************************************ 00:07:09.089 START TEST raid_function_test_concat 00:07:09.089 ************************************ 00:07:09.089 13:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:09.089 13:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:09.089 13:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:09.089 13:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:09.089 13:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=71611 00:07:09.089 Process raid pid: 71611 00:07:09.089 13:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:09.089 13:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71611' 00:07:09.089 13:20:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 71611 00:07:09.089 13:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 71611 ']' 00:07:09.089 13:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.090 13:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.090 13:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:09.090 13:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.090 13:20:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:09.090 [2024-11-20 13:20:50.612349] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:09.090 [2024-11-20 13:20:50.612487] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.349 [2024-11-20 13:20:50.768398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.349 [2024-11-20 13:20:50.793004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.349 [2024-11-20 13:20:50.835189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.349 [2024-11-20 13:20:50.835230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.937 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:09.938 Base_1 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:09.938 Base_2 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:09.938 [2024-11-20 13:20:51.450012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:09.938 [2024-11-20 13:20:51.451819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:09.938 [2024-11-20 13:20:51.451885] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:09.938 [2024-11-20 13:20:51.451903] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:09.938 [2024-11-20 13:20:51.452164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:09.938 [2024-11-20 13:20:51.452288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:09.938 [2024-11-20 13:20:51.452308] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:07:09.938 [2024-11-20 13:20:51.452451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:09.938 13:20:51 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:09.938 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:10.198 [2024-11-20 13:20:51.673671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:10.198 /dev/nbd0 00:07:10.198 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:10.198 13:20:51 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:10.198 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:10.198 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:10.198 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.198 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.198 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:10.198 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:10.198 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.198 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.198 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.198 1+0 records in 00:07:10.198 1+0 records out 00:07:10.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041638 s, 9.8 MB/s 00:07:10.198 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.198 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:10.198 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.199 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.199 13:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:10.199 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.199 
13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:10.199 13:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:10.199 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:10.199 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:10.459 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:10.459 { 00:07:10.459 "nbd_device": "/dev/nbd0", 00:07:10.459 "bdev_name": "raid" 00:07:10.459 } 00:07:10.459 ]' 00:07:10.459 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:10.459 { 00:07:10.459 "nbd_device": "/dev/nbd0", 00:07:10.459 "bdev_name": "raid" 00:07:10.459 } 00:07:10.459 ]' 00:07:10.459 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.459 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:10.459 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:10.459 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.459 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:10.459 13:20:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:10.459 13:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:10.459 13:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:10.459 13:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:10.459 13:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:10.459 
13:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:10.459 13:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:10.459 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:10.459 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:10.459 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:10.459 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:10.459 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:10.459 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:10.459 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:10.459 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:10.459 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:10.459 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:10.459 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:10.459 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:10.459 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:10.459 4096+0 records in 00:07:10.459 4096+0 records out 00:07:10.459 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0275738 s, 76.1 MB/s 00:07:10.459 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:10.719 4096+0 records in 00:07:10.719 4096+0 
records out 00:07:10.719 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.180691 s, 11.6 MB/s 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:10.719 128+0 records in 00:07:10.719 128+0 records out 00:07:10.719 65536 bytes (66 kB, 64 KiB) copied, 0.000383449 s, 171 MB/s 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:07:10.719 2035+0 records in 00:07:10.719 2035+0 records out 00:07:10.719 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00544238 s, 191 MB/s 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:10.719 456+0 records in 00:07:10.719 456+0 records out 00:07:10.719 233472 bytes (233 kB, 228 KiB) copied, 0.00396233 s, 58.9 MB/s 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:10.719 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:10.720 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:10.720 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.720 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:10.720 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.720 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:10.980 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.980 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.980 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.980 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.980 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.980 [2024-11-20 13:20:52.526567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.980 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.980 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:10.980 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.980 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:10.980 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:10.980 13:20:52 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 71611 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 71611 ']' 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 71611 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71611 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71611' 00:07:11.241 killing process with pid 71611 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 71611 00:07:11.241 [2024-11-20 13:20:52.831393] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.241 [2024-11-20 13:20:52.831507] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.241 [2024-11-20 13:20:52.831569] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.241 [2024-11-20 13:20:52.831583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:07:11.241 13:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 71611 00:07:11.241 [2024-11-20 13:20:52.854222] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.501 13:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:11.501 00:07:11.501 real 0m2.527s 00:07:11.501 user 0m3.127s 00:07:11.501 sys 0m0.860s 00:07:11.501 13:20:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.501 13:20:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:11.501 ************************************ 00:07:11.501 END TEST raid_function_test_concat 00:07:11.501 ************************************ 00:07:11.502 13:20:53 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:11.502 13:20:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.502 13:20:53 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.502 13:20:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.502 ************************************ 00:07:11.502 START TEST raid0_resize_test 00:07:11.502 ************************************ 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71722 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:11.502 Process raid pid: 71722 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71722' 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71722 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 71722 ']' 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:07:11.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.502 13:20:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.762 [2024-11-20 13:20:53.197423] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:11.762 [2024-11-20 13:20:53.197552] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.762 [2024-11-20 13:20:53.350478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.762 [2024-11-20 13:20:53.375312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.762 [2024-11-20 13:20:53.418657] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.762 [2024-11-20 13:20:53.418700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.703 Base_1 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.703 
13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.703 Base_2 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.703 [2024-11-20 13:20:54.060020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:12.703 [2024-11-20 13:20:54.061803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:12.703 [2024-11-20 13:20:54.061855] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:12.703 [2024-11-20 13:20:54.061865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:12.703 [2024-11-20 13:20:54.062129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:07:12.703 [2024-11-20 13:20:54.062262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:12.703 [2024-11-20 13:20:54.062271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:12.703 [2024-11-20 13:20:54.062374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.703 
13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.703 [2024-11-20 13:20:54.071974] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:12.703 [2024-11-20 13:20:54.072010] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:12.703 true 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.703 [2024-11-20 13:20:54.088149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.703 [2024-11-20 13:20:54.131845] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:12.703 [2024-11-20 13:20:54.131870] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:12.703 [2024-11-20 13:20:54.131894] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:12.703 true 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.703 [2024-11-20 13:20:54.144025] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 71722 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@954 -- # '[' -z 71722 ']' 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 71722 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71722 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.703 killing process with pid 71722 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71722' 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 71722 00:07:12.703 [2024-11-20 13:20:54.212820] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:12.703 [2024-11-20 13:20:54.212916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.703 [2024-11-20 13:20:54.212970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.703 [2024-11-20 13:20:54.212980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:12.703 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 71722 00:07:12.703 [2024-11-20 13:20:54.214547] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:12.963 13:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:12.963 00:07:12.963 real 0m1.290s 00:07:12.963 user 0m1.445s 00:07:12.963 sys 0m0.296s 00:07:12.963 13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.963 
13:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.963 ************************************ 00:07:12.963 END TEST raid0_resize_test 00:07:12.963 ************************************ 00:07:12.963 13:20:54 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:12.963 13:20:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:12.963 13:20:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.963 13:20:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.963 ************************************ 00:07:12.963 START TEST raid1_resize_test 00:07:12.963 ************************************ 00:07:12.963 13:20:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:12.963 13:20:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:12.963 13:20:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:12.963 13:20:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:12.963 13:20:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:12.963 13:20:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:12.963 13:20:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:12.963 13:20:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:12.963 13:20:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:12.963 13:20:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71772 00:07:12.964 13:20:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:12.964 Process raid pid: 71772 00:07:12.964 13:20:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 
'Process raid pid: 71772' 00:07:12.964 13:20:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71772 00:07:12.964 13:20:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 71772 ']' 00:07:12.964 13:20:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.964 13:20:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.964 13:20:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.964 13:20:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.964 13:20:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.964 [2024-11-20 13:20:54.560266] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:12.964 [2024-11-20 13:20:54.560724] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.224 [2024-11-20 13:20:54.691848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.224 [2024-11-20 13:20:54.717397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.224 [2024-11-20 13:20:54.760438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.224 [2024-11-20 13:20:54.760476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.795 Base_1 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.795 Base_2 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.795 [2024-11-20 13:20:55.413692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:13.795 [2024-11-20 13:20:55.415522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:13.795 [2024-11-20 13:20:55.415577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:13.795 [2024-11-20 13:20:55.415588] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:13.795 [2024-11-20 13:20:55.415857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:07:13.795 [2024-11-20 13:20:55.415967] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:13.795 [2024-11-20 13:20:55.415975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:13.795 [2024-11-20 13:20:55.416091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.795 [2024-11-20 13:20:55.425672] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:13.795 [2024-11-20 13:20:55.425705] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:13.795 true 00:07:13.795 
13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.795 [2024-11-20 13:20:55.437824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.795 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.056 [2024-11-20 13:20:55.485541] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:14.056 [2024-11-20 13:20:55.485566] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:14.056 [2024-11-20 13:20:55.485591] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:14.056 true 00:07:14.056 13:20:55 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.056 [2024-11-20 13:20:55.497702] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 71772 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 71772 ']' 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 71772 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71772 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.056 killing process with pid 71772 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71772' 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 71772 00:07:14.056 [2024-11-20 13:20:55.570153] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.056 [2024-11-20 13:20:55.570230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.056 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 71772 00:07:14.056 [2024-11-20 13:20:55.570628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.056 [2024-11-20 13:20:55.570658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:14.056 [2024-11-20 13:20:55.571759] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.317 13:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:14.317 00:07:14.317 real 0m1.297s 00:07:14.317 user 0m1.485s 00:07:14.317 sys 0m0.263s 00:07:14.317 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.317 13:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.317 ************************************ 00:07:14.317 END TEST raid1_resize_test 00:07:14.317 ************************************ 00:07:14.317 13:20:55 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:14.317 13:20:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:14.317 13:20:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:14.317 13:20:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:14.317 13:20:55 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.317 13:20:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.317 ************************************ 00:07:14.317 START TEST raid_state_function_test 00:07:14.317 ************************************ 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71823 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71823' 00:07:14.317 Process raid pid: 71823 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71823 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71823 ']' 00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:14.317 13:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.318 13:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.318 13:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.318 13:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.318 [2024-11-20 13:20:55.936911] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:14.318 [2024-11-20 13:20:55.937067] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.578 [2024-11-20 13:20:56.086238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.578 [2024-11-20 13:20:56.110489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.578 [2024-11-20 13:20:56.152358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.578 [2024-11-20 13:20:56.152392] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.149 [2024-11-20 
13:20:56.757839] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:15.149 [2024-11-20 13:20:56.757960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:15.149 [2024-11-20 13:20:56.757976] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.149 [2024-11-20 13:20:56.757985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.149 "name": "Existed_Raid", 00:07:15.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.149 "strip_size_kb": 64, 00:07:15.149 "state": "configuring", 00:07:15.149 "raid_level": "raid0", 00:07:15.149 "superblock": false, 00:07:15.149 "num_base_bdevs": 2, 00:07:15.149 "num_base_bdevs_discovered": 0, 00:07:15.149 "num_base_bdevs_operational": 2, 00:07:15.149 "base_bdevs_list": [ 00:07:15.149 { 00:07:15.149 "name": "BaseBdev1", 00:07:15.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.149 "is_configured": false, 00:07:15.149 "data_offset": 0, 00:07:15.149 "data_size": 0 00:07:15.149 }, 00:07:15.149 { 00:07:15.149 "name": "BaseBdev2", 00:07:15.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.149 "is_configured": false, 00:07:15.149 "data_offset": 0, 00:07:15.149 "data_size": 0 00:07:15.149 } 00:07:15.149 ] 00:07:15.149 }' 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.149 13:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.719 [2024-11-20 13:20:57.189019] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:15.719 [2024-11-20 
13:20:57.189100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.719 [2024-11-20 13:20:57.197012] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:15.719 [2024-11-20 13:20:57.197089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:15.719 [2024-11-20 13:20:57.197115] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.719 [2024-11-20 13:20:57.197151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.719 [2024-11-20 13:20:57.213776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.719 BaseBdev1 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:15.719 13:20:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.719 [ 00:07:15.719 { 00:07:15.719 "name": "BaseBdev1", 00:07:15.719 "aliases": [ 00:07:15.719 "5aa3dbb6-a7d1-48b1-9446-dae5c6a432ba" 00:07:15.719 ], 00:07:15.719 "product_name": "Malloc disk", 00:07:15.719 "block_size": 512, 00:07:15.719 "num_blocks": 65536, 00:07:15.719 "uuid": "5aa3dbb6-a7d1-48b1-9446-dae5c6a432ba", 00:07:15.719 "assigned_rate_limits": { 00:07:15.719 "rw_ios_per_sec": 0, 00:07:15.719 "rw_mbytes_per_sec": 0, 00:07:15.719 "r_mbytes_per_sec": 0, 00:07:15.719 "w_mbytes_per_sec": 0 00:07:15.719 }, 00:07:15.719 "claimed": true, 00:07:15.719 "claim_type": "exclusive_write", 00:07:15.719 "zoned": false, 00:07:15.719 "supported_io_types": { 
00:07:15.719 "read": true, 00:07:15.719 "write": true, 00:07:15.719 "unmap": true, 00:07:15.719 "flush": true, 00:07:15.719 "reset": true, 00:07:15.719 "nvme_admin": false, 00:07:15.719 "nvme_io": false, 00:07:15.719 "nvme_io_md": false, 00:07:15.719 "write_zeroes": true, 00:07:15.719 "zcopy": true, 00:07:15.719 "get_zone_info": false, 00:07:15.719 "zone_management": false, 00:07:15.719 "zone_append": false, 00:07:15.719 "compare": false, 00:07:15.719 "compare_and_write": false, 00:07:15.719 "abort": true, 00:07:15.719 "seek_hole": false, 00:07:15.719 "seek_data": false, 00:07:15.719 "copy": true, 00:07:15.719 "nvme_iov_md": false 00:07:15.719 }, 00:07:15.719 "memory_domains": [ 00:07:15.719 { 00:07:15.719 "dma_device_id": "system", 00:07:15.719 "dma_device_type": 1 00:07:15.719 }, 00:07:15.719 { 00:07:15.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.719 "dma_device_type": 2 00:07:15.719 } 00:07:15.719 ], 00:07:15.719 "driver_specific": {} 00:07:15.719 } 00:07:15.719 ] 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.719 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.719 "name": "Existed_Raid", 00:07:15.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.719 "strip_size_kb": 64, 00:07:15.719 "state": "configuring", 00:07:15.719 "raid_level": "raid0", 00:07:15.719 "superblock": false, 00:07:15.719 "num_base_bdevs": 2, 00:07:15.719 "num_base_bdevs_discovered": 1, 00:07:15.719 "num_base_bdevs_operational": 2, 00:07:15.719 "base_bdevs_list": [ 00:07:15.719 { 00:07:15.719 "name": "BaseBdev1", 00:07:15.720 "uuid": "5aa3dbb6-a7d1-48b1-9446-dae5c6a432ba", 00:07:15.720 "is_configured": true, 00:07:15.720 "data_offset": 0, 00:07:15.720 "data_size": 65536 00:07:15.720 }, 00:07:15.720 { 00:07:15.720 "name": "BaseBdev2", 00:07:15.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.720 "is_configured": false, 00:07:15.720 "data_offset": 0, 00:07:15.720 "data_size": 0 00:07:15.720 } 00:07:15.720 ] 00:07:15.720 }' 00:07:15.720 13:20:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.720 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.285 [2024-11-20 13:20:57.689008] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:16.285 [2024-11-20 13:20:57.689135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.285 [2024-11-20 13:20:57.697024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:16.285 [2024-11-20 13:20:57.698941] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:16.285 [2024-11-20 13:20:57.699032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:16.285 13:20:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.285 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.285 "name": "Existed_Raid", 00:07:16.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.285 "strip_size_kb": 64, 00:07:16.285 "state": "configuring", 00:07:16.285 
"raid_level": "raid0", 00:07:16.285 "superblock": false, 00:07:16.285 "num_base_bdevs": 2, 00:07:16.285 "num_base_bdevs_discovered": 1, 00:07:16.285 "num_base_bdevs_operational": 2, 00:07:16.285 "base_bdevs_list": [ 00:07:16.285 { 00:07:16.285 "name": "BaseBdev1", 00:07:16.285 "uuid": "5aa3dbb6-a7d1-48b1-9446-dae5c6a432ba", 00:07:16.285 "is_configured": true, 00:07:16.285 "data_offset": 0, 00:07:16.285 "data_size": 65536 00:07:16.285 }, 00:07:16.285 { 00:07:16.286 "name": "BaseBdev2", 00:07:16.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.286 "is_configured": false, 00:07:16.286 "data_offset": 0, 00:07:16.286 "data_size": 0 00:07:16.286 } 00:07:16.286 ] 00:07:16.286 }' 00:07:16.286 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.286 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.544 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:16.544 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.544 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.544 [2024-11-20 13:20:58.159347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:16.544 [2024-11-20 13:20:58.159479] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:16.544 [2024-11-20 13:20:58.159506] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:16.544 [2024-11-20 13:20:58.159826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:16.544 [2024-11-20 13:20:58.160022] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:16.545 [2024-11-20 13:20:58.160071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000001900 00:07:16.545 [2024-11-20 13:20:58.160321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.545 BaseBdev2 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.545 [ 00:07:16.545 { 00:07:16.545 "name": "BaseBdev2", 00:07:16.545 "aliases": [ 00:07:16.545 "adde9234-263f-4921-80da-cf73eaaaf800" 00:07:16.545 ], 00:07:16.545 "product_name": "Malloc disk", 00:07:16.545 "block_size": 512, 00:07:16.545 
"num_blocks": 65536, 00:07:16.545 "uuid": "adde9234-263f-4921-80da-cf73eaaaf800", 00:07:16.545 "assigned_rate_limits": { 00:07:16.545 "rw_ios_per_sec": 0, 00:07:16.545 "rw_mbytes_per_sec": 0, 00:07:16.545 "r_mbytes_per_sec": 0, 00:07:16.545 "w_mbytes_per_sec": 0 00:07:16.545 }, 00:07:16.545 "claimed": true, 00:07:16.545 "claim_type": "exclusive_write", 00:07:16.545 "zoned": false, 00:07:16.545 "supported_io_types": { 00:07:16.545 "read": true, 00:07:16.545 "write": true, 00:07:16.545 "unmap": true, 00:07:16.545 "flush": true, 00:07:16.545 "reset": true, 00:07:16.545 "nvme_admin": false, 00:07:16.545 "nvme_io": false, 00:07:16.545 "nvme_io_md": false, 00:07:16.545 "write_zeroes": true, 00:07:16.545 "zcopy": true, 00:07:16.545 "get_zone_info": false, 00:07:16.545 "zone_management": false, 00:07:16.545 "zone_append": false, 00:07:16.545 "compare": false, 00:07:16.545 "compare_and_write": false, 00:07:16.545 "abort": true, 00:07:16.545 "seek_hole": false, 00:07:16.545 "seek_data": false, 00:07:16.545 "copy": true, 00:07:16.545 "nvme_iov_md": false 00:07:16.545 }, 00:07:16.545 "memory_domains": [ 00:07:16.545 { 00:07:16.545 "dma_device_id": "system", 00:07:16.545 "dma_device_type": 1 00:07:16.545 }, 00:07:16.545 { 00:07:16.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.545 "dma_device_type": 2 00:07:16.545 } 00:07:16.545 ], 00:07:16.545 "driver_specific": {} 00:07:16.545 } 00:07:16.545 ] 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:16.545 13:20:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.545 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.803 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.804 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.804 "name": "Existed_Raid", 00:07:16.804 "uuid": "9681c321-0833-418a-a79b-3d0f9d8be801", 00:07:16.804 "strip_size_kb": 64, 00:07:16.804 "state": "online", 00:07:16.804 "raid_level": "raid0", 00:07:16.804 "superblock": false, 00:07:16.804 "num_base_bdevs": 2, 00:07:16.804 "num_base_bdevs_discovered": 2, 00:07:16.804 
"num_base_bdevs_operational": 2, 00:07:16.804 "base_bdevs_list": [ 00:07:16.804 { 00:07:16.804 "name": "BaseBdev1", 00:07:16.804 "uuid": "5aa3dbb6-a7d1-48b1-9446-dae5c6a432ba", 00:07:16.804 "is_configured": true, 00:07:16.804 "data_offset": 0, 00:07:16.804 "data_size": 65536 00:07:16.804 }, 00:07:16.804 { 00:07:16.804 "name": "BaseBdev2", 00:07:16.804 "uuid": "adde9234-263f-4921-80da-cf73eaaaf800", 00:07:16.804 "is_configured": true, 00:07:16.804 "data_offset": 0, 00:07:16.804 "data_size": 65536 00:07:16.804 } 00:07:16.804 ] 00:07:16.804 }' 00:07:16.804 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.804 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:17.063 [2024-11-20 13:20:58.586913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:17.063 "name": "Existed_Raid", 00:07:17.063 "aliases": [ 00:07:17.063 "9681c321-0833-418a-a79b-3d0f9d8be801" 00:07:17.063 ], 00:07:17.063 "product_name": "Raid Volume", 00:07:17.063 "block_size": 512, 00:07:17.063 "num_blocks": 131072, 00:07:17.063 "uuid": "9681c321-0833-418a-a79b-3d0f9d8be801", 00:07:17.063 "assigned_rate_limits": { 00:07:17.063 "rw_ios_per_sec": 0, 00:07:17.063 "rw_mbytes_per_sec": 0, 00:07:17.063 "r_mbytes_per_sec": 0, 00:07:17.063 "w_mbytes_per_sec": 0 00:07:17.063 }, 00:07:17.063 "claimed": false, 00:07:17.063 "zoned": false, 00:07:17.063 "supported_io_types": { 00:07:17.063 "read": true, 00:07:17.063 "write": true, 00:07:17.063 "unmap": true, 00:07:17.063 "flush": true, 00:07:17.063 "reset": true, 00:07:17.063 "nvme_admin": false, 00:07:17.063 "nvme_io": false, 00:07:17.063 "nvme_io_md": false, 00:07:17.063 "write_zeroes": true, 00:07:17.063 "zcopy": false, 00:07:17.063 "get_zone_info": false, 00:07:17.063 "zone_management": false, 00:07:17.063 "zone_append": false, 00:07:17.063 "compare": false, 00:07:17.063 "compare_and_write": false, 00:07:17.063 "abort": false, 00:07:17.063 "seek_hole": false, 00:07:17.063 "seek_data": false, 00:07:17.063 "copy": false, 00:07:17.063 "nvme_iov_md": false 00:07:17.063 }, 00:07:17.063 "memory_domains": [ 00:07:17.063 { 00:07:17.063 "dma_device_id": "system", 00:07:17.063 "dma_device_type": 1 00:07:17.063 }, 00:07:17.063 { 00:07:17.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.063 "dma_device_type": 2 00:07:17.063 }, 00:07:17.063 { 00:07:17.063 "dma_device_id": "system", 00:07:17.063 "dma_device_type": 1 00:07:17.063 }, 00:07:17.063 { 00:07:17.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.063 "dma_device_type": 2 00:07:17.063 } 00:07:17.063 ], 00:07:17.063 "driver_specific": { 
00:07:17.063 "raid": { 00:07:17.063 "uuid": "9681c321-0833-418a-a79b-3d0f9d8be801", 00:07:17.063 "strip_size_kb": 64, 00:07:17.063 "state": "online", 00:07:17.063 "raid_level": "raid0", 00:07:17.063 "superblock": false, 00:07:17.063 "num_base_bdevs": 2, 00:07:17.063 "num_base_bdevs_discovered": 2, 00:07:17.063 "num_base_bdevs_operational": 2, 00:07:17.063 "base_bdevs_list": [ 00:07:17.063 { 00:07:17.063 "name": "BaseBdev1", 00:07:17.063 "uuid": "5aa3dbb6-a7d1-48b1-9446-dae5c6a432ba", 00:07:17.063 "is_configured": true, 00:07:17.063 "data_offset": 0, 00:07:17.063 "data_size": 65536 00:07:17.063 }, 00:07:17.063 { 00:07:17.063 "name": "BaseBdev2", 00:07:17.063 "uuid": "adde9234-263f-4921-80da-cf73eaaaf800", 00:07:17.063 "is_configured": true, 00:07:17.063 "data_offset": 0, 00:07:17.063 "data_size": 65536 00:07:17.063 } 00:07:17.063 ] 00:07:17.063 } 00:07:17.063 } 00:07:17.063 }' 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:17.063 BaseBdev2' 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.063 
13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.063 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.323 [2024-11-20 13:20:58.786366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:17.323 [2024-11-20 13:20:58.786438] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.323 [2024-11-20 13:20:58.786496] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.323 13:20:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.323 "name": "Existed_Raid", 00:07:17.323 "uuid": "9681c321-0833-418a-a79b-3d0f9d8be801", 00:07:17.323 "strip_size_kb": 64, 00:07:17.323 "state": "offline", 00:07:17.323 "raid_level": "raid0", 00:07:17.323 "superblock": false, 00:07:17.323 "num_base_bdevs": 2, 00:07:17.323 "num_base_bdevs_discovered": 1, 00:07:17.323 "num_base_bdevs_operational": 1, 00:07:17.323 "base_bdevs_list": [ 00:07:17.323 { 00:07:17.323 "name": null, 00:07:17.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.323 "is_configured": false, 00:07:17.323 "data_offset": 0, 00:07:17.323 "data_size": 65536 00:07:17.323 }, 00:07:17.323 { 00:07:17.323 "name": "BaseBdev2", 00:07:17.323 "uuid": "adde9234-263f-4921-80da-cf73eaaaf800", 00:07:17.323 "is_configured": true, 00:07:17.323 "data_offset": 0, 00:07:17.323 "data_size": 65536 00:07:17.323 } 00:07:17.323 ] 00:07:17.323 }' 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.323 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.892 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:17.892 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:17.892 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.892 13:20:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:17.892 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.892 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.892 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.892 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:17.892 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:17.892 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:17.892 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.892 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.892 [2024-11-20 13:20:59.332757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:17.892 [2024-11-20 13:20:59.332850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:17.892 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.892 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:17.892 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.893 13:20:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71823 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71823 ']' 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71823 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71823 00:07:17.893 killing process with pid 71823 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71823' 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71823 00:07:17.893 [2024-11-20 13:20:59.444886] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.893 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71823 00:07:17.893 [2024-11-20 13:20:59.445856] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.152 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 
00:07:18.152 00:07:18.152 real 0m3.801s 00:07:18.152 user 0m6.067s 00:07:18.152 sys 0m0.710s 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.153 ************************************ 00:07:18.153 END TEST raid_state_function_test 00:07:18.153 ************************************ 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.153 13:20:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:18.153 13:20:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:18.153 13:20:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.153 13:20:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.153 ************************************ 00:07:18.153 START TEST raid_state_function_test_sb 00:07:18.153 ************************************ 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:18.153 Process raid pid: 72060 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72060 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L 
bdev_raid 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72060' 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72060 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72060 ']' 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.153 13:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.153 [2024-11-20 13:20:59.809438] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:18.153 [2024-11-20 13:20:59.810066] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.412 [2024-11-20 13:20:59.963357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.412 [2024-11-20 13:20:59.989282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.412 [2024-11-20 13:21:00.031688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.412 [2024-11-20 13:21:00.031800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.980 [2024-11-20 13:21:00.624848] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:18.980 [2024-11-20 13:21:00.624986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:18.980 [2024-11-20 13:21:00.625023] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:18.980 [2024-11-20 13:21:00.625036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.980 
13:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.980 13:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.981 13:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.981 13:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.240 13:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.240 13:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.240 "name": "Existed_Raid", 00:07:19.240 "uuid": "7e984fde-4dbc-4016-995a-dee7cebcdea6", 00:07:19.240 "strip_size_kb": 
64, 00:07:19.240 "state": "configuring", 00:07:19.240 "raid_level": "raid0", 00:07:19.240 "superblock": true, 00:07:19.240 "num_base_bdevs": 2, 00:07:19.240 "num_base_bdevs_discovered": 0, 00:07:19.240 "num_base_bdevs_operational": 2, 00:07:19.240 "base_bdevs_list": [ 00:07:19.240 { 00:07:19.240 "name": "BaseBdev1", 00:07:19.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.240 "is_configured": false, 00:07:19.240 "data_offset": 0, 00:07:19.240 "data_size": 0 00:07:19.240 }, 00:07:19.240 { 00:07:19.240 "name": "BaseBdev2", 00:07:19.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.240 "is_configured": false, 00:07:19.240 "data_offset": 0, 00:07:19.240 "data_size": 0 00:07:19.240 } 00:07:19.240 ] 00:07:19.240 }' 00:07:19.240 13:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.240 13:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.510 [2024-11-20 13:21:01.048032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:19.510 [2024-11-20 13:21:01.048071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.510 13:21:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.510 [2024-11-20 13:21:01.056030] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:19.510 [2024-11-20 13:21:01.056069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:19.510 [2024-11-20 13:21:01.056078] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.510 [2024-11-20 13:21:01.056096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.510 [2024-11-20 13:21:01.072977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:19.510 BaseBdev1 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.510 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.510 [ 00:07:19.510 { 00:07:19.510 "name": "BaseBdev1", 00:07:19.510 "aliases": [ 00:07:19.510 "6d26b035-a8b8-4208-a8d3-c5e6bafc41f4" 00:07:19.510 ], 00:07:19.510 "product_name": "Malloc disk", 00:07:19.510 "block_size": 512, 00:07:19.510 "num_blocks": 65536, 00:07:19.510 "uuid": "6d26b035-a8b8-4208-a8d3-c5e6bafc41f4", 00:07:19.510 "assigned_rate_limits": { 00:07:19.510 "rw_ios_per_sec": 0, 00:07:19.510 "rw_mbytes_per_sec": 0, 00:07:19.510 "r_mbytes_per_sec": 0, 00:07:19.510 "w_mbytes_per_sec": 0 00:07:19.510 }, 00:07:19.510 "claimed": true, 00:07:19.510 "claim_type": "exclusive_write", 00:07:19.510 "zoned": false, 00:07:19.510 "supported_io_types": { 00:07:19.510 "read": true, 00:07:19.510 "write": true, 00:07:19.510 "unmap": true, 00:07:19.510 "flush": true, 00:07:19.510 "reset": true, 00:07:19.510 "nvme_admin": false, 00:07:19.510 "nvme_io": false, 00:07:19.510 "nvme_io_md": false, 00:07:19.510 "write_zeroes": true, 00:07:19.510 "zcopy": true, 00:07:19.510 "get_zone_info": false, 00:07:19.510 "zone_management": false, 00:07:19.510 "zone_append": false, 00:07:19.510 "compare": false, 00:07:19.510 "compare_and_write": false, 00:07:19.510 
"abort": true, 00:07:19.511 "seek_hole": false, 00:07:19.511 "seek_data": false, 00:07:19.511 "copy": true, 00:07:19.511 "nvme_iov_md": false 00:07:19.511 }, 00:07:19.511 "memory_domains": [ 00:07:19.511 { 00:07:19.511 "dma_device_id": "system", 00:07:19.511 "dma_device_type": 1 00:07:19.511 }, 00:07:19.511 { 00:07:19.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.511 "dma_device_type": 2 00:07:19.511 } 00:07:19.511 ], 00:07:19.511 "driver_specific": {} 00:07:19.511 } 00:07:19.511 ] 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.511 "name": "Existed_Raid", 00:07:19.511 "uuid": "ff2bbb03-01ac-4357-8674-878ade391160", 00:07:19.511 "strip_size_kb": 64, 00:07:19.511 "state": "configuring", 00:07:19.511 "raid_level": "raid0", 00:07:19.511 "superblock": true, 00:07:19.511 "num_base_bdevs": 2, 00:07:19.511 "num_base_bdevs_discovered": 1, 00:07:19.511 "num_base_bdevs_operational": 2, 00:07:19.511 "base_bdevs_list": [ 00:07:19.511 { 00:07:19.511 "name": "BaseBdev1", 00:07:19.511 "uuid": "6d26b035-a8b8-4208-a8d3-c5e6bafc41f4", 00:07:19.511 "is_configured": true, 00:07:19.511 "data_offset": 2048, 00:07:19.511 "data_size": 63488 00:07:19.511 }, 00:07:19.511 { 00:07:19.511 "name": "BaseBdev2", 00:07:19.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.511 "is_configured": false, 00:07:19.511 "data_offset": 0, 00:07:19.511 "data_size": 0 00:07:19.511 } 00:07:19.511 ] 00:07:19.511 }' 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.511 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:20.097 [2024-11-20 13:21:01.532248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:20.097 [2024-11-20 13:21:01.532354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.097 [2024-11-20 13:21:01.540253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:20.097 [2024-11-20 13:21:01.542177] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.097 [2024-11-20 13:21:01.542248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.097 "name": "Existed_Raid", 00:07:20.097 "uuid": "d7932135-a24b-4885-b4aa-660c3323abe0", 00:07:20.097 "strip_size_kb": 64, 00:07:20.097 "state": "configuring", 00:07:20.097 "raid_level": "raid0", 00:07:20.097 "superblock": true, 00:07:20.097 "num_base_bdevs": 2, 00:07:20.097 "num_base_bdevs_discovered": 1, 00:07:20.097 "num_base_bdevs_operational": 2, 00:07:20.097 "base_bdevs_list": [ 00:07:20.097 { 00:07:20.097 "name": "BaseBdev1", 00:07:20.097 "uuid": "6d26b035-a8b8-4208-a8d3-c5e6bafc41f4", 00:07:20.097 "is_configured": true, 00:07:20.097 "data_offset": 2048, 
00:07:20.097 "data_size": 63488 00:07:20.097 }, 00:07:20.097 { 00:07:20.097 "name": "BaseBdev2", 00:07:20.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.097 "is_configured": false, 00:07:20.097 "data_offset": 0, 00:07:20.097 "data_size": 0 00:07:20.097 } 00:07:20.097 ] 00:07:20.097 }' 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.097 13:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.357 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:20.357 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.357 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.357 BaseBdev2 00:07:20.357 [2024-11-20 13:21:02.018373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:20.357 [2024-11-20 13:21:02.018554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:20.357 [2024-11-20 13:21:02.018569] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:20.357 [2024-11-20 13:21:02.018832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:20.357 [2024-11-20 13:21:02.018977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:20.357 [2024-11-20 13:21:02.019035] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:20.357 [2024-11-20 13:21:02.019166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.357 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.357 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:20.357 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:20.357 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:20.357 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:20.357 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:20.357 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:20.357 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:20.357 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.357 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.616 [ 00:07:20.616 { 00:07:20.616 "name": "BaseBdev2", 00:07:20.616 "aliases": [ 00:07:20.616 "815e0ee3-bfa2-471a-9074-56b6801c4996" 00:07:20.616 ], 00:07:20.616 "product_name": "Malloc disk", 00:07:20.616 "block_size": 512, 00:07:20.616 "num_blocks": 65536, 00:07:20.616 "uuid": "815e0ee3-bfa2-471a-9074-56b6801c4996", 00:07:20.616 "assigned_rate_limits": { 00:07:20.616 "rw_ios_per_sec": 0, 00:07:20.616 "rw_mbytes_per_sec": 0, 00:07:20.616 "r_mbytes_per_sec": 0, 00:07:20.616 "w_mbytes_per_sec": 0 00:07:20.616 }, 00:07:20.616 "claimed": true, 00:07:20.616 "claim_type": 
"exclusive_write", 00:07:20.616 "zoned": false, 00:07:20.616 "supported_io_types": { 00:07:20.616 "read": true, 00:07:20.616 "write": true, 00:07:20.616 "unmap": true, 00:07:20.616 "flush": true, 00:07:20.616 "reset": true, 00:07:20.616 "nvme_admin": false, 00:07:20.616 "nvme_io": false, 00:07:20.616 "nvme_io_md": false, 00:07:20.616 "write_zeroes": true, 00:07:20.616 "zcopy": true, 00:07:20.616 "get_zone_info": false, 00:07:20.616 "zone_management": false, 00:07:20.616 "zone_append": false, 00:07:20.616 "compare": false, 00:07:20.616 "compare_and_write": false, 00:07:20.616 "abort": true, 00:07:20.616 "seek_hole": false, 00:07:20.616 "seek_data": false, 00:07:20.616 "copy": true, 00:07:20.616 "nvme_iov_md": false 00:07:20.616 }, 00:07:20.616 "memory_domains": [ 00:07:20.616 { 00:07:20.616 "dma_device_id": "system", 00:07:20.616 "dma_device_type": 1 00:07:20.616 }, 00:07:20.616 { 00:07:20.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.616 "dma_device_type": 2 00:07:20.616 } 00:07:20.616 ], 00:07:20.616 "driver_specific": {} 00:07:20.616 } 00:07:20.616 ] 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.616 "name": "Existed_Raid", 00:07:20.616 "uuid": "d7932135-a24b-4885-b4aa-660c3323abe0", 00:07:20.616 "strip_size_kb": 64, 00:07:20.616 "state": "online", 00:07:20.616 "raid_level": "raid0", 00:07:20.616 "superblock": true, 00:07:20.616 "num_base_bdevs": 2, 00:07:20.616 "num_base_bdevs_discovered": 2, 00:07:20.616 "num_base_bdevs_operational": 2, 00:07:20.616 "base_bdevs_list": [ 00:07:20.616 { 00:07:20.616 "name": "BaseBdev1", 00:07:20.616 "uuid": "6d26b035-a8b8-4208-a8d3-c5e6bafc41f4", 00:07:20.616 "is_configured": true, 00:07:20.616 "data_offset": 2048, 00:07:20.616 "data_size": 63488 
00:07:20.616 }, 00:07:20.616 { 00:07:20.616 "name": "BaseBdev2", 00:07:20.616 "uuid": "815e0ee3-bfa2-471a-9074-56b6801c4996", 00:07:20.616 "is_configured": true, 00:07:20.616 "data_offset": 2048, 00:07:20.616 "data_size": 63488 00:07:20.616 } 00:07:20.616 ] 00:07:20.616 }' 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.616 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.876 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:20.876 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:20.876 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:20.876 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:20.876 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:20.876 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:20.876 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:20.876 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.876 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.876 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:20.876 [2024-11-20 13:21:02.501820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.876 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:21.139 "name": 
"Existed_Raid", 00:07:21.139 "aliases": [ 00:07:21.139 "d7932135-a24b-4885-b4aa-660c3323abe0" 00:07:21.139 ], 00:07:21.139 "product_name": "Raid Volume", 00:07:21.139 "block_size": 512, 00:07:21.139 "num_blocks": 126976, 00:07:21.139 "uuid": "d7932135-a24b-4885-b4aa-660c3323abe0", 00:07:21.139 "assigned_rate_limits": { 00:07:21.139 "rw_ios_per_sec": 0, 00:07:21.139 "rw_mbytes_per_sec": 0, 00:07:21.139 "r_mbytes_per_sec": 0, 00:07:21.139 "w_mbytes_per_sec": 0 00:07:21.139 }, 00:07:21.139 "claimed": false, 00:07:21.139 "zoned": false, 00:07:21.139 "supported_io_types": { 00:07:21.139 "read": true, 00:07:21.139 "write": true, 00:07:21.139 "unmap": true, 00:07:21.139 "flush": true, 00:07:21.139 "reset": true, 00:07:21.139 "nvme_admin": false, 00:07:21.139 "nvme_io": false, 00:07:21.139 "nvme_io_md": false, 00:07:21.139 "write_zeroes": true, 00:07:21.139 "zcopy": false, 00:07:21.139 "get_zone_info": false, 00:07:21.139 "zone_management": false, 00:07:21.139 "zone_append": false, 00:07:21.139 "compare": false, 00:07:21.139 "compare_and_write": false, 00:07:21.139 "abort": false, 00:07:21.139 "seek_hole": false, 00:07:21.139 "seek_data": false, 00:07:21.139 "copy": false, 00:07:21.139 "nvme_iov_md": false 00:07:21.139 }, 00:07:21.139 "memory_domains": [ 00:07:21.139 { 00:07:21.139 "dma_device_id": "system", 00:07:21.139 "dma_device_type": 1 00:07:21.139 }, 00:07:21.139 { 00:07:21.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.139 "dma_device_type": 2 00:07:21.139 }, 00:07:21.139 { 00:07:21.139 "dma_device_id": "system", 00:07:21.139 "dma_device_type": 1 00:07:21.139 }, 00:07:21.139 { 00:07:21.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.139 "dma_device_type": 2 00:07:21.139 } 00:07:21.139 ], 00:07:21.139 "driver_specific": { 00:07:21.139 "raid": { 00:07:21.139 "uuid": "d7932135-a24b-4885-b4aa-660c3323abe0", 00:07:21.139 "strip_size_kb": 64, 00:07:21.139 "state": "online", 00:07:21.139 "raid_level": "raid0", 00:07:21.139 "superblock": true, 00:07:21.139 
"num_base_bdevs": 2, 00:07:21.139 "num_base_bdevs_discovered": 2, 00:07:21.139 "num_base_bdevs_operational": 2, 00:07:21.139 "base_bdevs_list": [ 00:07:21.139 { 00:07:21.139 "name": "BaseBdev1", 00:07:21.139 "uuid": "6d26b035-a8b8-4208-a8d3-c5e6bafc41f4", 00:07:21.139 "is_configured": true, 00:07:21.139 "data_offset": 2048, 00:07:21.139 "data_size": 63488 00:07:21.139 }, 00:07:21.139 { 00:07:21.139 "name": "BaseBdev2", 00:07:21.139 "uuid": "815e0ee3-bfa2-471a-9074-56b6801c4996", 00:07:21.139 "is_configured": true, 00:07:21.139 "data_offset": 2048, 00:07:21.139 "data_size": 63488 00:07:21.139 } 00:07:21.139 ] 00:07:21.139 } 00:07:21.139 } 00:07:21.139 }' 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:21.139 BaseBdev2' 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.139 [2024-11-20 13:21:02.713270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:21.139 [2024-11-20 13:21:02.713343] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:21.139 [2024-11-20 13:21:02.713397] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.139 13:21:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.139 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.139 "name": "Existed_Raid", 00:07:21.139 "uuid": "d7932135-a24b-4885-b4aa-660c3323abe0", 00:07:21.139 "strip_size_kb": 64, 00:07:21.139 "state": "offline", 00:07:21.139 "raid_level": "raid0", 00:07:21.139 "superblock": true, 00:07:21.139 "num_base_bdevs": 2, 00:07:21.139 "num_base_bdevs_discovered": 1, 00:07:21.139 "num_base_bdevs_operational": 1, 00:07:21.139 "base_bdevs_list": [ 00:07:21.139 { 00:07:21.139 "name": null, 00:07:21.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.139 "is_configured": false, 00:07:21.139 "data_offset": 0, 00:07:21.139 "data_size": 63488 00:07:21.139 }, 00:07:21.140 { 00:07:21.140 "name": "BaseBdev2", 00:07:21.140 "uuid": "815e0ee3-bfa2-471a-9074-56b6801c4996", 00:07:21.140 "is_configured": true, 00:07:21.140 "data_offset": 2048, 00:07:21.140 "data_size": 63488 00:07:21.140 } 00:07:21.140 ] 00:07:21.140 }' 00:07:21.140 13:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.140 13:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:21.712 13:21:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.712 [2024-11-20 13:21:03.227763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:21.712 [2024-11-20 13:21:03.227874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.712 13:21:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72060 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72060 ']' 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72060 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72060 00:07:21.712 killing process with pid 72060 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72060' 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72060 00:07:21.712 [2024-11-20 13:21:03.319597] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:21.712 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72060 00:07:21.712 [2024-11-20 13:21:03.320591] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.972 ************************************ 
00:07:21.972 END TEST raid_state_function_test_sb 00:07:21.972 ************************************ 00:07:21.972 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:21.972 00:07:21.972 real 0m3.811s 00:07:21.972 user 0m6.089s 00:07:21.972 sys 0m0.715s 00:07:21.972 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.972 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.972 13:21:03 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:21.972 13:21:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:21.972 13:21:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.972 13:21:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.972 ************************************ 00:07:21.972 START TEST raid_superblock_test 00:07:21.972 ************************************ 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:21.972 
13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72296 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72296 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72296 ']' 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.972 13:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.230 [2024-11-20 13:21:03.687491] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:22.230 [2024-11-20 13:21:03.687707] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72296 ] 00:07:22.230 [2024-11-20 13:21:03.843439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.230 [2024-11-20 13:21:03.870148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.489 [2024-11-20 13:21:03.912686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.489 [2024-11-20 13:21:03.912798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:23.058 13:21:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.058 malloc1 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.058 [2024-11-20 13:21:04.534556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:23.058 [2024-11-20 13:21:04.534662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.058 [2024-11-20 13:21:04.534721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:23.058 [2024-11-20 13:21:04.534758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.058 [2024-11-20 13:21:04.536880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.058 [2024-11-20 13:21:04.536960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:23.058 pt1 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:23.058 13:21:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.058 malloc2 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.058 [2024-11-20 13:21:04.563037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:23.058 [2024-11-20 13:21:04.563141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.058 [2024-11-20 13:21:04.563160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:23.058 
[2024-11-20 13:21:04.563170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.058 [2024-11-20 13:21:04.565159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.058 [2024-11-20 13:21:04.565196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:23.058 pt2 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.058 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.058 [2024-11-20 13:21:04.575054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:23.058 [2024-11-20 13:21:04.576802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:23.058 [2024-11-20 13:21:04.576937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:23.058 [2024-11-20 13:21:04.576955] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:23.058 [2024-11-20 13:21:04.577221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:23.058 [2024-11-20 13:21:04.577418] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:23.058 [2024-11-20 13:21:04.577433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:23.058 [2024-11-20 13:21:04.577546] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.059 "name": "raid_bdev1", 00:07:23.059 "uuid": 
"72bce581-7b4b-4859-b2a1-70186019c5cc", 00:07:23.059 "strip_size_kb": 64, 00:07:23.059 "state": "online", 00:07:23.059 "raid_level": "raid0", 00:07:23.059 "superblock": true, 00:07:23.059 "num_base_bdevs": 2, 00:07:23.059 "num_base_bdevs_discovered": 2, 00:07:23.059 "num_base_bdevs_operational": 2, 00:07:23.059 "base_bdevs_list": [ 00:07:23.059 { 00:07:23.059 "name": "pt1", 00:07:23.059 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:23.059 "is_configured": true, 00:07:23.059 "data_offset": 2048, 00:07:23.059 "data_size": 63488 00:07:23.059 }, 00:07:23.059 { 00:07:23.059 "name": "pt2", 00:07:23.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:23.059 "is_configured": true, 00:07:23.059 "data_offset": 2048, 00:07:23.059 "data_size": 63488 00:07:23.059 } 00:07:23.059 ] 00:07:23.059 }' 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.059 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.627 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:23.627 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:23.627 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:23.627 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:23.627 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:23.627 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:23.627 13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:23.627 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.627 13:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.627 
13:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:23.627 [2024-11-20 13:21:05.006593] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:23.627 "name": "raid_bdev1", 00:07:23.627 "aliases": [ 00:07:23.627 "72bce581-7b4b-4859-b2a1-70186019c5cc" 00:07:23.627 ], 00:07:23.627 "product_name": "Raid Volume", 00:07:23.627 "block_size": 512, 00:07:23.627 "num_blocks": 126976, 00:07:23.627 "uuid": "72bce581-7b4b-4859-b2a1-70186019c5cc", 00:07:23.627 "assigned_rate_limits": { 00:07:23.627 "rw_ios_per_sec": 0, 00:07:23.627 "rw_mbytes_per_sec": 0, 00:07:23.627 "r_mbytes_per_sec": 0, 00:07:23.627 "w_mbytes_per_sec": 0 00:07:23.627 }, 00:07:23.627 "claimed": false, 00:07:23.627 "zoned": false, 00:07:23.627 "supported_io_types": { 00:07:23.627 "read": true, 00:07:23.627 "write": true, 00:07:23.627 "unmap": true, 00:07:23.627 "flush": true, 00:07:23.627 "reset": true, 00:07:23.627 "nvme_admin": false, 00:07:23.627 "nvme_io": false, 00:07:23.627 "nvme_io_md": false, 00:07:23.627 "write_zeroes": true, 00:07:23.627 "zcopy": false, 00:07:23.627 "get_zone_info": false, 00:07:23.627 "zone_management": false, 00:07:23.627 "zone_append": false, 00:07:23.627 "compare": false, 00:07:23.627 "compare_and_write": false, 00:07:23.627 "abort": false, 00:07:23.627 "seek_hole": false, 00:07:23.627 "seek_data": false, 00:07:23.627 "copy": false, 00:07:23.627 "nvme_iov_md": false 00:07:23.627 }, 00:07:23.627 "memory_domains": [ 00:07:23.627 { 00:07:23.627 "dma_device_id": "system", 00:07:23.627 "dma_device_type": 1 00:07:23.627 }, 00:07:23.627 { 00:07:23.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.627 "dma_device_type": 2 00:07:23.627 }, 00:07:23.627 { 00:07:23.627 "dma_device_id": "system", 00:07:23.627 
"dma_device_type": 1 00:07:23.627 }, 00:07:23.627 { 00:07:23.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.627 "dma_device_type": 2 00:07:23.627 } 00:07:23.627 ], 00:07:23.627 "driver_specific": { 00:07:23.627 "raid": { 00:07:23.627 "uuid": "72bce581-7b4b-4859-b2a1-70186019c5cc", 00:07:23.627 "strip_size_kb": 64, 00:07:23.627 "state": "online", 00:07:23.627 "raid_level": "raid0", 00:07:23.627 "superblock": true, 00:07:23.627 "num_base_bdevs": 2, 00:07:23.627 "num_base_bdevs_discovered": 2, 00:07:23.627 "num_base_bdevs_operational": 2, 00:07:23.627 "base_bdevs_list": [ 00:07:23.627 { 00:07:23.627 "name": "pt1", 00:07:23.627 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:23.627 "is_configured": true, 00:07:23.627 "data_offset": 2048, 00:07:23.627 "data_size": 63488 00:07:23.627 }, 00:07:23.627 { 00:07:23.627 "name": "pt2", 00:07:23.627 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:23.627 "is_configured": true, 00:07:23.627 "data_offset": 2048, 00:07:23.627 "data_size": 63488 00:07:23.627 } 00:07:23.627 ] 00:07:23.627 } 00:07:23.627 } 00:07:23.627 }' 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:23.627 pt2' 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.627 13:21:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.627 [2024-11-20 13:21:05.242086] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=72bce581-7b4b-4859-b2a1-70186019c5cc 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 72bce581-7b4b-4859-b2a1-70186019c5cc ']' 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.627 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.627 [2024-11-20 13:21:05.289773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:23.627 [2024-11-20 13:21:05.289799] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:23.627 [2024-11-20 13:21:05.289882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.627 [2024-11-20 13:21:05.289932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.627 [2024-11-20 13:21:05.289940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.887 [2024-11-20 13:21:05.425552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:23.887 [2024-11-20 13:21:05.427389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:23.887 [2024-11-20 13:21:05.427467] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:23.887 [2024-11-20 13:21:05.427505] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:23.887 [2024-11-20 13:21:05.427520] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:23.887 [2024-11-20 13:21:05.427528] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:23.887 request: 00:07:23.887 { 00:07:23.887 "name": "raid_bdev1", 00:07:23.887 "raid_level": "raid0", 00:07:23.887 "base_bdevs": [ 00:07:23.887 "malloc1", 00:07:23.887 "malloc2" 00:07:23.887 ], 00:07:23.887 "strip_size_kb": 64, 00:07:23.887 "superblock": false, 00:07:23.887 "method": "bdev_raid_create", 00:07:23.887 "req_id": 1 00:07:23.887 } 00:07:23.887 Got JSON-RPC error response 00:07:23.887 response: 00:07:23.887 { 00:07:23.887 "code": -17, 00:07:23.887 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:23.887 } 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.887 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.888 [2024-11-20 13:21:05.489423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:23.888 [2024-11-20 13:21:05.489471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.888 [2024-11-20 13:21:05.489487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:23.888 [2024-11-20 13:21:05.489495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.888 [2024-11-20 13:21:05.491565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.888 [2024-11-20 13:21:05.491601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:23.888 [2024-11-20 13:21:05.491664] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:23.888 [2024-11-20 13:21:05.491716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:23.888 pt1 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.888 "name": "raid_bdev1", 00:07:23.888 "uuid": "72bce581-7b4b-4859-b2a1-70186019c5cc", 00:07:23.888 "strip_size_kb": 64, 00:07:23.888 "state": "configuring", 00:07:23.888 "raid_level": "raid0", 00:07:23.888 "superblock": true, 00:07:23.888 "num_base_bdevs": 2, 00:07:23.888 "num_base_bdevs_discovered": 1, 00:07:23.888 "num_base_bdevs_operational": 2, 00:07:23.888 "base_bdevs_list": [ 00:07:23.888 { 00:07:23.888 "name": "pt1", 00:07:23.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:23.888 "is_configured": true, 00:07:23.888 "data_offset": 2048, 00:07:23.888 "data_size": 63488 00:07:23.888 }, 00:07:23.888 { 00:07:23.888 "name": null, 00:07:23.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:23.888 "is_configured": false, 00:07:23.888 "data_offset": 2048, 00:07:23.888 "data_size": 63488 00:07:23.888 } 00:07:23.888 ] 00:07:23.888 }' 00:07:23.888 13:21:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.888 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.457 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:24.457 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:24.457 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:24.457 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:24.457 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.457 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.457 [2024-11-20 13:21:05.888735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:24.457 [2024-11-20 13:21:05.888845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.457 [2024-11-20 13:21:05.888885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:24.457 [2024-11-20 13:21:05.888911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.457 [2024-11-20 13:21:05.889339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.457 [2024-11-20 13:21:05.889402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:24.457 [2024-11-20 13:21:05.889505] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:24.457 [2024-11-20 13:21:05.889553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:24.457 [2024-11-20 13:21:05.889659] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:24.457 [2024-11-20 13:21:05.889695] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:24.457 [2024-11-20 13:21:05.889957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:24.458 [2024-11-20 13:21:05.890116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:24.458 [2024-11-20 13:21:05.890165] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:24.458 [2024-11-20 13:21:05.890301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.458 pt2 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.458 "name": "raid_bdev1", 00:07:24.458 "uuid": "72bce581-7b4b-4859-b2a1-70186019c5cc", 00:07:24.458 "strip_size_kb": 64, 00:07:24.458 "state": "online", 00:07:24.458 "raid_level": "raid0", 00:07:24.458 "superblock": true, 00:07:24.458 "num_base_bdevs": 2, 00:07:24.458 "num_base_bdevs_discovered": 2, 00:07:24.458 "num_base_bdevs_operational": 2, 00:07:24.458 "base_bdevs_list": [ 00:07:24.458 { 00:07:24.458 "name": "pt1", 00:07:24.458 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:24.458 "is_configured": true, 00:07:24.458 "data_offset": 2048, 00:07:24.458 "data_size": 63488 00:07:24.458 }, 00:07:24.458 { 00:07:24.458 "name": "pt2", 00:07:24.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:24.458 "is_configured": true, 00:07:24.458 "data_offset": 2048, 00:07:24.458 "data_size": 63488 00:07:24.458 } 00:07:24.458 ] 00:07:24.458 }' 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.458 13:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.717 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:24.717 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:24.717 
13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:24.717 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:24.717 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:24.717 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:24.717 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:24.717 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:24.717 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.717 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.717 [2024-11-20 13:21:06.248405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.717 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.717 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:24.717 "name": "raid_bdev1", 00:07:24.717 "aliases": [ 00:07:24.717 "72bce581-7b4b-4859-b2a1-70186019c5cc" 00:07:24.717 ], 00:07:24.717 "product_name": "Raid Volume", 00:07:24.717 "block_size": 512, 00:07:24.717 "num_blocks": 126976, 00:07:24.717 "uuid": "72bce581-7b4b-4859-b2a1-70186019c5cc", 00:07:24.717 "assigned_rate_limits": { 00:07:24.717 "rw_ios_per_sec": 0, 00:07:24.717 "rw_mbytes_per_sec": 0, 00:07:24.717 "r_mbytes_per_sec": 0, 00:07:24.717 "w_mbytes_per_sec": 0 00:07:24.717 }, 00:07:24.718 "claimed": false, 00:07:24.718 "zoned": false, 00:07:24.718 "supported_io_types": { 00:07:24.718 "read": true, 00:07:24.718 "write": true, 00:07:24.718 "unmap": true, 00:07:24.718 "flush": true, 00:07:24.718 "reset": true, 00:07:24.718 "nvme_admin": false, 00:07:24.718 "nvme_io": false, 00:07:24.718 "nvme_io_md": false, 00:07:24.718 
"write_zeroes": true, 00:07:24.718 "zcopy": false, 00:07:24.718 "get_zone_info": false, 00:07:24.718 "zone_management": false, 00:07:24.718 "zone_append": false, 00:07:24.718 "compare": false, 00:07:24.718 "compare_and_write": false, 00:07:24.718 "abort": false, 00:07:24.718 "seek_hole": false, 00:07:24.718 "seek_data": false, 00:07:24.718 "copy": false, 00:07:24.718 "nvme_iov_md": false 00:07:24.718 }, 00:07:24.718 "memory_domains": [ 00:07:24.718 { 00:07:24.718 "dma_device_id": "system", 00:07:24.718 "dma_device_type": 1 00:07:24.718 }, 00:07:24.718 { 00:07:24.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.718 "dma_device_type": 2 00:07:24.718 }, 00:07:24.718 { 00:07:24.718 "dma_device_id": "system", 00:07:24.718 "dma_device_type": 1 00:07:24.718 }, 00:07:24.718 { 00:07:24.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.718 "dma_device_type": 2 00:07:24.718 } 00:07:24.718 ], 00:07:24.718 "driver_specific": { 00:07:24.718 "raid": { 00:07:24.718 "uuid": "72bce581-7b4b-4859-b2a1-70186019c5cc", 00:07:24.718 "strip_size_kb": 64, 00:07:24.718 "state": "online", 00:07:24.718 "raid_level": "raid0", 00:07:24.718 "superblock": true, 00:07:24.718 "num_base_bdevs": 2, 00:07:24.718 "num_base_bdevs_discovered": 2, 00:07:24.718 "num_base_bdevs_operational": 2, 00:07:24.718 "base_bdevs_list": [ 00:07:24.718 { 00:07:24.718 "name": "pt1", 00:07:24.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:24.718 "is_configured": true, 00:07:24.718 "data_offset": 2048, 00:07:24.718 "data_size": 63488 00:07:24.718 }, 00:07:24.718 { 00:07:24.718 "name": "pt2", 00:07:24.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:24.718 "is_configured": true, 00:07:24.718 "data_offset": 2048, 00:07:24.718 "data_size": 63488 00:07:24.718 } 00:07:24.718 ] 00:07:24.718 } 00:07:24.718 } 00:07:24.718 }' 00:07:24.718 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:24.718 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:24.718 pt2' 00:07:24.718 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.718 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:24.718 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:24.718 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:24.718 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.718 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.718 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.977 13:21:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.977 [2024-11-20 13:21:06.456015] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 72bce581-7b4b-4859-b2a1-70186019c5cc '!=' 72bce581-7b4b-4859-b2a1-70186019c5cc ']' 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72296 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72296 ']' 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72296 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72296 00:07:24.977 killing process with pid 72296 
00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72296' 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72296 00:07:24.977 [2024-11-20 13:21:06.525938] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.977 [2024-11-20 13:21:06.526032] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.977 [2024-11-20 13:21:06.526081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:24.977 [2024-11-20 13:21:06.526090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:24.977 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72296 00:07:24.977 [2024-11-20 13:21:06.548541] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.236 13:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:25.236 00:07:25.236 real 0m3.152s 00:07:25.236 user 0m4.883s 00:07:25.236 sys 0m0.647s 00:07:25.236 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.236 ************************************ 00:07:25.236 END TEST raid_superblock_test 00:07:25.236 ************************************ 00:07:25.236 13:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.236 13:21:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:25.236 13:21:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:25.236 13:21:06 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.236 13:21:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.236 ************************************ 00:07:25.236 START TEST raid_read_error_test 00:07:25.236 ************************************ 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:25.236 13:21:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2S4XhRjs0R 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72491 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72491 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72491 ']' 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.236 13:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:25.237 13:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.237 13:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.496 [2024-11-20 13:21:06.927687] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:25.496 [2024-11-20 13:21:06.927921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72491 ] 00:07:25.496 [2024-11-20 13:21:07.080693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.496 [2024-11-20 13:21:07.104898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.496 [2024-11-20 13:21:07.147178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.496 [2024-11-20 13:21:07.147283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.432 BaseBdev1_malloc 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.432 true 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.432 [2024-11-20 13:21:07.793606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:26.432 [2024-11-20 13:21:07.793658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.432 [2024-11-20 13:21:07.793692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:26.432 [2024-11-20 13:21:07.793700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.432 [2024-11-20 13:21:07.795779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.432 [2024-11-20 13:21:07.795816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:26.432 BaseBdev1 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:26.432 BaseBdev2_malloc 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.432 true 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.432 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.433 [2024-11-20 13:21:07.834243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:26.433 [2024-11-20 13:21:07.834343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.433 [2024-11-20 13:21:07.834365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:26.433 [2024-11-20 13:21:07.834382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.433 [2024-11-20 13:21:07.836450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.433 [2024-11-20 13:21:07.836487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:26.433 BaseBdev2 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:26.433 13:21:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.433 [2024-11-20 13:21:07.846289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.433 [2024-11-20 13:21:07.848128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:26.433 [2024-11-20 13:21:07.848368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:26.433 [2024-11-20 13:21:07.848390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:26.433 [2024-11-20 13:21:07.848640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:26.433 [2024-11-20 13:21:07.848766] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:26.433 [2024-11-20 13:21:07.848779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:26.433 [2024-11-20 13:21:07.848900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.433 "name": "raid_bdev1", 00:07:26.433 "uuid": "1e52050b-2b56-4e34-9da8-41237dcccb49", 00:07:26.433 "strip_size_kb": 64, 00:07:26.433 "state": "online", 00:07:26.433 "raid_level": "raid0", 00:07:26.433 "superblock": true, 00:07:26.433 "num_base_bdevs": 2, 00:07:26.433 "num_base_bdevs_discovered": 2, 00:07:26.433 "num_base_bdevs_operational": 2, 00:07:26.433 "base_bdevs_list": [ 00:07:26.433 { 00:07:26.433 "name": "BaseBdev1", 00:07:26.433 "uuid": "36bb8bd8-2cd6-5fba-9fbb-555e1f08984c", 00:07:26.433 "is_configured": true, 00:07:26.433 "data_offset": 2048, 00:07:26.433 "data_size": 63488 00:07:26.433 }, 00:07:26.433 { 00:07:26.433 "name": "BaseBdev2", 00:07:26.433 "uuid": "f6c178de-ca39-558c-ba8b-f4855b49116a", 00:07:26.433 "is_configured": true, 00:07:26.433 "data_offset": 2048, 00:07:26.433 "data_size": 63488 00:07:26.433 } 00:07:26.433 ] 00:07:26.433 }' 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.433 13:21:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.695 13:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:26.695 13:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:26.695 [2024-11-20 13:21:08.325844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:07:27.634 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:27.634 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.634 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.634 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.634 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:27.634 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:27.634 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:27.634 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:27.634 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.634 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.635 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.635 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.635 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:27.635 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.635 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.635 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.635 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.635 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.635 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.635 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.635 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.635 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.894 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.894 "name": "raid_bdev1", 00:07:27.894 "uuid": "1e52050b-2b56-4e34-9da8-41237dcccb49", 00:07:27.894 "strip_size_kb": 64, 00:07:27.894 "state": "online", 00:07:27.894 "raid_level": "raid0", 00:07:27.894 "superblock": true, 00:07:27.894 "num_base_bdevs": 2, 00:07:27.894 "num_base_bdevs_discovered": 2, 00:07:27.894 "num_base_bdevs_operational": 2, 00:07:27.894 "base_bdevs_list": [ 00:07:27.894 { 00:07:27.894 "name": "BaseBdev1", 00:07:27.894 "uuid": "36bb8bd8-2cd6-5fba-9fbb-555e1f08984c", 00:07:27.894 "is_configured": true, 00:07:27.894 "data_offset": 2048, 00:07:27.894 "data_size": 63488 00:07:27.894 }, 00:07:27.894 { 00:07:27.894 "name": "BaseBdev2", 00:07:27.894 "uuid": "f6c178de-ca39-558c-ba8b-f4855b49116a", 00:07:27.894 "is_configured": true, 00:07:27.894 "data_offset": 2048, 00:07:27.894 "data_size": 63488 00:07:27.894 } 00:07:27.894 ] 00:07:27.894 }' 00:07:27.894 13:21:09 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.894 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.175 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:28.175 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.175 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.175 [2024-11-20 13:21:09.713559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:28.175 [2024-11-20 13:21:09.713655] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:28.175 [2024-11-20 13:21:09.716148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.175 [2024-11-20 13:21:09.716238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.175 [2024-11-20 13:21:09.716295] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.175 [2024-11-20 13:21:09.716343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:28.175 { 00:07:28.175 "results": [ 00:07:28.175 { 00:07:28.175 "job": "raid_bdev1", 00:07:28.175 "core_mask": "0x1", 00:07:28.175 "workload": "randrw", 00:07:28.175 "percentage": 50, 00:07:28.175 "status": "finished", 00:07:28.175 "queue_depth": 1, 00:07:28.175 "io_size": 131072, 00:07:28.175 "runtime": 1.388592, 00:07:28.175 "iops": 17810.12709276735, 00:07:28.175 "mibps": 2226.265886595919, 00:07:28.175 "io_failed": 1, 00:07:28.175 "io_timeout": 0, 00:07:28.175 "avg_latency_us": 77.57674932040028, 00:07:28.175 "min_latency_us": 24.705676855895195, 00:07:28.175 "max_latency_us": 1395.1441048034935 00:07:28.175 } 00:07:28.175 ], 00:07:28.175 "core_count": 1 00:07:28.175 } 00:07:28.175 13:21:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.175 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72491 00:07:28.175 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72491 ']' 00:07:28.175 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72491 00:07:28.175 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:28.175 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.175 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72491 00:07:28.175 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.175 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.175 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72491' 00:07:28.175 killing process with pid 72491 00:07:28.175 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72491 00:07:28.175 [2024-11-20 13:21:09.763888] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.175 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72491 00:07:28.175 [2024-11-20 13:21:09.779217] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:28.463 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2S4XhRjs0R 00:07:28.463 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:28.463 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:28.463 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:28.463 13:21:09 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:28.463 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:28.463 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:28.463 13:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:28.463 00:07:28.463 real 0m3.158s 00:07:28.463 user 0m4.026s 00:07:28.463 sys 0m0.487s 00:07:28.463 ************************************ 00:07:28.463 END TEST raid_read_error_test 00:07:28.463 ************************************ 00:07:28.463 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.463 13:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.463 13:21:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:28.463 13:21:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:28.463 13:21:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.463 13:21:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.463 ************************************ 00:07:28.463 START TEST raid_write_error_test 00:07:28.463 ************************************ 00:07:28.463 13:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:28.463 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.464 13:21:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vxTUmBf7zW 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72620 00:07:28.464 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72620 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72620 ']' 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.464 13:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.723 [2024-11-20 13:21:10.163108] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:28.723 [2024-11-20 13:21:10.163336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72620 ] 00:07:28.723 [2024-11-20 13:21:10.317886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.723 [2024-11-20 13:21:10.342323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.723 [2024-11-20 13:21:10.384286] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.723 [2024-11-20 13:21:10.384397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.663 13:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.663 13:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:29.663 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:29.663 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:29.663 13:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.663 13:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.663 BaseBdev1_malloc 00:07:29.663 13:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.663 13:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:29.663 13:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.663 13:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.663 true 00:07:29.663 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:29.663 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:29.663 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.663 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.663 [2024-11-20 13:21:11.010403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:29.663 [2024-11-20 13:21:11.010453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.663 [2024-11-20 13:21:11.010494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:29.663 [2024-11-20 13:21:11.010503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.663 [2024-11-20 13:21:11.012606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.663 [2024-11-20 13:21:11.012645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:29.663 BaseBdev1 00:07:29.663 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.663 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:29.663 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:29.663 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.663 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.663 BaseBdev2_malloc 00:07:29.663 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.663 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:29.663 13:21:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.663 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.663 true 00:07:29.663 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.663 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.664 [2024-11-20 13:21:11.050923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:29.664 [2024-11-20 13:21:11.050970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.664 [2024-11-20 13:21:11.050987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:29.664 [2024-11-20 13:21:11.051024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.664 [2024-11-20 13:21:11.053195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.664 [2024-11-20 13:21:11.053274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:29.664 BaseBdev2 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.664 [2024-11-20 13:21:11.062962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:29.664 [2024-11-20 13:21:11.065002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.664 [2024-11-20 13:21:11.065180] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:29.664 [2024-11-20 13:21:11.065193] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:29.664 [2024-11-20 13:21:11.065466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:29.664 [2024-11-20 13:21:11.065598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:29.664 [2024-11-20 13:21:11.065615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:29.664 [2024-11-20 13:21:11.065749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.664 "name": "raid_bdev1", 00:07:29.664 "uuid": "8d044f1d-330b-485e-9bc7-2aa0a238c193", 00:07:29.664 "strip_size_kb": 64, 00:07:29.664 "state": "online", 00:07:29.664 "raid_level": "raid0", 00:07:29.664 "superblock": true, 00:07:29.664 "num_base_bdevs": 2, 00:07:29.664 "num_base_bdevs_discovered": 2, 00:07:29.664 "num_base_bdevs_operational": 2, 00:07:29.664 "base_bdevs_list": [ 00:07:29.664 { 00:07:29.664 "name": "BaseBdev1", 00:07:29.664 "uuid": "5fcb032c-e6a3-5c10-8d14-fa4c6c40cb9c", 00:07:29.664 "is_configured": true, 00:07:29.664 "data_offset": 2048, 00:07:29.664 "data_size": 63488 00:07:29.664 }, 00:07:29.664 { 00:07:29.664 "name": "BaseBdev2", 00:07:29.664 "uuid": "095a2c1f-271d-50f2-9971-9f0cf96fb5e6", 00:07:29.664 "is_configured": true, 00:07:29.664 "data_offset": 2048, 00:07:29.664 "data_size": 63488 00:07:29.664 } 00:07:29.664 ] 00:07:29.664 }' 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.664 13:21:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.924 13:21:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:29.924 13:21:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:30.184 [2024-11-20 13:21:11.594406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:07:31.122 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:31.122 13:21:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.123 13:21:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.123 "name": "raid_bdev1", 00:07:31.123 "uuid": "8d044f1d-330b-485e-9bc7-2aa0a238c193", 00:07:31.123 "strip_size_kb": 64, 00:07:31.123 "state": "online", 00:07:31.123 "raid_level": "raid0", 00:07:31.123 "superblock": true, 00:07:31.123 "num_base_bdevs": 2, 00:07:31.123 "num_base_bdevs_discovered": 2, 00:07:31.123 "num_base_bdevs_operational": 2, 00:07:31.123 "base_bdevs_list": [ 00:07:31.123 { 00:07:31.123 "name": "BaseBdev1", 00:07:31.123 "uuid": "5fcb032c-e6a3-5c10-8d14-fa4c6c40cb9c", 00:07:31.123 "is_configured": true, 00:07:31.123 "data_offset": 2048, 00:07:31.123 "data_size": 63488 00:07:31.123 }, 00:07:31.123 { 00:07:31.123 "name": "BaseBdev2", 00:07:31.123 "uuid": "095a2c1f-271d-50f2-9971-9f0cf96fb5e6", 00:07:31.123 "is_configured": true, 00:07:31.123 "data_offset": 2048, 00:07:31.123 "data_size": 63488 00:07:31.123 } 00:07:31.123 ] 00:07:31.123 }' 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.123 13:21:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.382 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:31.382 13:21:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.382 13:21:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.382 [2024-11-20 13:21:12.982163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.382 [2024-11-20 13:21:12.982195] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.382 [2024-11-20 13:21:12.984655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.382 [2024-11-20 13:21:12.984688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.382 [2024-11-20 13:21:12.984721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.382 [2024-11-20 13:21:12.984730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:31.382 { 00:07:31.382 "results": [ 00:07:31.382 { 00:07:31.382 "job": "raid_bdev1", 00:07:31.382 "core_mask": "0x1", 00:07:31.382 "workload": "randrw", 00:07:31.382 "percentage": 50, 00:07:31.382 "status": "finished", 00:07:31.382 "queue_depth": 1, 00:07:31.382 "io_size": 131072, 00:07:31.382 "runtime": 1.388561, 00:07:31.383 "iops": 17716.182436349573, 00:07:31.383 "mibps": 2214.5228045436966, 00:07:31.383 "io_failed": 1, 00:07:31.383 "io_timeout": 0, 00:07:31.383 "avg_latency_us": 78.11618294353427, 00:07:31.383 "min_latency_us": 24.817467248908297, 00:07:31.383 "max_latency_us": 1387.989519650655 00:07:31.383 } 00:07:31.383 ], 00:07:31.383 "core_count": 1 00:07:31.383 } 00:07:31.383 13:21:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.383 13:21:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72620 00:07:31.383 13:21:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 72620 ']' 00:07:31.383 13:21:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72620 00:07:31.383 13:21:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:31.383 13:21:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.383 13:21:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72620 00:07:31.383 13:21:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.383 13:21:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.383 13:21:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72620' 00:07:31.383 killing process with pid 72620 00:07:31.383 13:21:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72620 00:07:31.383 [2024-11-20 13:21:13.029249] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:31.383 13:21:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72620 00:07:31.383 [2024-11-20 13:21:13.044533] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.642 13:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vxTUmBf7zW 00:07:31.642 13:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:31.642 13:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:31.642 13:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:31.642 13:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:31.642 13:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:31.642 13:21:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:31.642 13:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:31.642 00:07:31.642 real 0m3.193s 00:07:31.642 user 0m4.093s 00:07:31.642 sys 0m0.477s 00:07:31.642 13:21:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.642 13:21:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.642 ************************************ 00:07:31.642 END TEST raid_write_error_test 00:07:31.642 ************************************ 00:07:31.901 13:21:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:31.901 13:21:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:31.901 13:21:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:31.901 13:21:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.901 13:21:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.901 ************************************ 00:07:31.901 START TEST raid_state_function_test 00:07:31.901 ************************************ 00:07:31.901 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:31.901 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:31.901 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:31.901 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:31.901 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:31.901 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:31.901 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:31.901 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:31.901 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.901 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.901 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:31.901 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.901 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.901 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:31.901 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:31.901 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72753 00:07:31.902 13:21:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72753' 00:07:31.902 Process raid pid: 72753 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72753 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72753 ']' 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.902 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.902 [2024-11-20 13:21:13.417978] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:31.902 [2024-11-20 13:21:13.418113] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.902 [2024-11-20 13:21:13.549782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.161 [2024-11-20 13:21:13.573699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.161 [2024-11-20 13:21:13.615532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.161 [2024-11-20 13:21:13.615572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.731 [2024-11-20 13:21:14.248501] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.731 [2024-11-20 13:21:14.248557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.731 [2024-11-20 13:21:14.248583] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.731 [2024-11-20 13:21:14.248594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.731 13:21:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.731 "name": "Existed_Raid", 00:07:32.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.731 "strip_size_kb": 64, 00:07:32.731 "state": "configuring", 00:07:32.731 
"raid_level": "concat", 00:07:32.731 "superblock": false, 00:07:32.731 "num_base_bdevs": 2, 00:07:32.731 "num_base_bdevs_discovered": 0, 00:07:32.731 "num_base_bdevs_operational": 2, 00:07:32.731 "base_bdevs_list": [ 00:07:32.731 { 00:07:32.731 "name": "BaseBdev1", 00:07:32.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.731 "is_configured": false, 00:07:32.731 "data_offset": 0, 00:07:32.731 "data_size": 0 00:07:32.731 }, 00:07:32.731 { 00:07:32.731 "name": "BaseBdev2", 00:07:32.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.731 "is_configured": false, 00:07:32.731 "data_offset": 0, 00:07:32.731 "data_size": 0 00:07:32.731 } 00:07:32.731 ] 00:07:32.731 }' 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.731 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.991 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:32.991 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.991 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.991 [2024-11-20 13:21:14.651750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:32.991 [2024-11-20 13:21:14.651793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:32.991 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.991 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.991 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.991 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:33.251 [2024-11-20 13:21:14.659735] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:33.251 [2024-11-20 13:21:14.659779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:33.251 [2024-11-20 13:21:14.659788] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.251 [2024-11-20 13:21:14.659808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.251 [2024-11-20 13:21:14.676612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.251 BaseBdev1 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.251 [ 00:07:33.251 { 00:07:33.251 "name": "BaseBdev1", 00:07:33.251 "aliases": [ 00:07:33.251 "9bba51f5-b82a-4636-bfa2-79979058f334" 00:07:33.251 ], 00:07:33.251 "product_name": "Malloc disk", 00:07:33.251 "block_size": 512, 00:07:33.251 "num_blocks": 65536, 00:07:33.251 "uuid": "9bba51f5-b82a-4636-bfa2-79979058f334", 00:07:33.251 "assigned_rate_limits": { 00:07:33.251 "rw_ios_per_sec": 0, 00:07:33.251 "rw_mbytes_per_sec": 0, 00:07:33.251 "r_mbytes_per_sec": 0, 00:07:33.251 "w_mbytes_per_sec": 0 00:07:33.251 }, 00:07:33.251 "claimed": true, 00:07:33.251 "claim_type": "exclusive_write", 00:07:33.251 "zoned": false, 00:07:33.251 "supported_io_types": { 00:07:33.251 "read": true, 00:07:33.251 "write": true, 00:07:33.251 "unmap": true, 00:07:33.251 "flush": true, 00:07:33.251 "reset": true, 00:07:33.251 "nvme_admin": false, 00:07:33.251 "nvme_io": false, 00:07:33.251 "nvme_io_md": false, 00:07:33.251 "write_zeroes": true, 00:07:33.251 "zcopy": true, 00:07:33.251 "get_zone_info": false, 00:07:33.251 "zone_management": false, 00:07:33.251 "zone_append": false, 00:07:33.251 "compare": false, 00:07:33.251 "compare_and_write": false, 00:07:33.251 "abort": true, 00:07:33.251 "seek_hole": false, 00:07:33.251 "seek_data": false, 00:07:33.251 "copy": true, 00:07:33.251 "nvme_iov_md": 
false 00:07:33.251 }, 00:07:33.251 "memory_domains": [ 00:07:33.251 { 00:07:33.251 "dma_device_id": "system", 00:07:33.251 "dma_device_type": 1 00:07:33.251 }, 00:07:33.251 { 00:07:33.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.251 "dma_device_type": 2 00:07:33.251 } 00:07:33.251 ], 00:07:33.251 "driver_specific": {} 00:07:33.251 } 00:07:33.251 ] 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.251 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.252 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.252 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.252 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.252 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.252 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.252 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.252 
13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.252 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.252 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.252 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.252 "name": "Existed_Raid", 00:07:33.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.252 "strip_size_kb": 64, 00:07:33.252 "state": "configuring", 00:07:33.252 "raid_level": "concat", 00:07:33.252 "superblock": false, 00:07:33.252 "num_base_bdevs": 2, 00:07:33.252 "num_base_bdevs_discovered": 1, 00:07:33.252 "num_base_bdevs_operational": 2, 00:07:33.252 "base_bdevs_list": [ 00:07:33.252 { 00:07:33.252 "name": "BaseBdev1", 00:07:33.252 "uuid": "9bba51f5-b82a-4636-bfa2-79979058f334", 00:07:33.252 "is_configured": true, 00:07:33.252 "data_offset": 0, 00:07:33.252 "data_size": 65536 00:07:33.252 }, 00:07:33.252 { 00:07:33.252 "name": "BaseBdev2", 00:07:33.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.252 "is_configured": false, 00:07:33.252 "data_offset": 0, 00:07:33.252 "data_size": 0 00:07:33.252 } 00:07:33.252 ] 00:07:33.252 }' 00:07:33.252 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.252 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.512 [2024-11-20 13:21:15.147823] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:33.512 [2024-11-20 13:21:15.147864] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.512 [2024-11-20 13:21:15.159842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.512 [2024-11-20 13:21:15.161677] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.512 [2024-11-20 13:21:15.161755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.512 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.772 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.772 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.772 "name": "Existed_Raid", 00:07:33.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.772 "strip_size_kb": 64, 00:07:33.772 "state": "configuring", 00:07:33.772 "raid_level": "concat", 00:07:33.772 "superblock": false, 00:07:33.772 "num_base_bdevs": 2, 00:07:33.772 "num_base_bdevs_discovered": 1, 00:07:33.772 "num_base_bdevs_operational": 2, 00:07:33.772 "base_bdevs_list": [ 00:07:33.772 { 00:07:33.772 "name": "BaseBdev1", 00:07:33.772 "uuid": "9bba51f5-b82a-4636-bfa2-79979058f334", 00:07:33.772 "is_configured": true, 00:07:33.772 "data_offset": 0, 00:07:33.772 "data_size": 65536 00:07:33.772 }, 00:07:33.772 { 00:07:33.772 "name": "BaseBdev2", 00:07:33.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.772 "is_configured": false, 00:07:33.772 "data_offset": 0, 00:07:33.772 "data_size": 0 00:07:33.772 } 
00:07:33.772 ] 00:07:33.772 }' 00:07:33.772 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.772 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.032 [2024-11-20 13:21:15.578193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:34.032 [2024-11-20 13:21:15.578303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:34.032 [2024-11-20 13:21:15.578330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:34.032 [2024-11-20 13:21:15.578647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:34.032 [2024-11-20 13:21:15.578837] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:34.032 [2024-11-20 13:21:15.578890] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:34.032 [2024-11-20 13:21:15.579159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.032 BaseBdev2 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:34.032 13:21:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.032 [ 00:07:34.032 { 00:07:34.032 "name": "BaseBdev2", 00:07:34.032 "aliases": [ 00:07:34.032 "343dbf99-8cf1-474c-a4d8-210ce2c8d258" 00:07:34.032 ], 00:07:34.032 "product_name": "Malloc disk", 00:07:34.032 "block_size": 512, 00:07:34.032 "num_blocks": 65536, 00:07:34.032 "uuid": "343dbf99-8cf1-474c-a4d8-210ce2c8d258", 00:07:34.032 "assigned_rate_limits": { 00:07:34.032 "rw_ios_per_sec": 0, 00:07:34.032 "rw_mbytes_per_sec": 0, 00:07:34.032 "r_mbytes_per_sec": 0, 00:07:34.032 "w_mbytes_per_sec": 0 00:07:34.032 }, 00:07:34.032 "claimed": true, 00:07:34.032 "claim_type": "exclusive_write", 00:07:34.032 "zoned": false, 00:07:34.032 "supported_io_types": { 00:07:34.032 "read": true, 00:07:34.032 "write": true, 00:07:34.032 "unmap": true, 00:07:34.032 "flush": true, 00:07:34.032 "reset": true, 00:07:34.032 "nvme_admin": false, 00:07:34.032 "nvme_io": false, 00:07:34.032 "nvme_io_md": 
false, 00:07:34.032 "write_zeroes": true, 00:07:34.032 "zcopy": true, 00:07:34.032 "get_zone_info": false, 00:07:34.032 "zone_management": false, 00:07:34.032 "zone_append": false, 00:07:34.032 "compare": false, 00:07:34.032 "compare_and_write": false, 00:07:34.032 "abort": true, 00:07:34.032 "seek_hole": false, 00:07:34.032 "seek_data": false, 00:07:34.032 "copy": true, 00:07:34.032 "nvme_iov_md": false 00:07:34.032 }, 00:07:34.032 "memory_domains": [ 00:07:34.032 { 00:07:34.032 "dma_device_id": "system", 00:07:34.032 "dma_device_type": 1 00:07:34.032 }, 00:07:34.032 { 00:07:34.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.032 "dma_device_type": 2 00:07:34.032 } 00:07:34.032 ], 00:07:34.032 "driver_specific": {} 00:07:34.032 } 00:07:34.032 ] 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:34.032 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.033 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.033 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.033 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.033 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.033 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.033 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.033 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.033 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.033 "name": "Existed_Raid", 00:07:34.033 "uuid": "e57eba45-1f73-4942-a1e7-eb2e4cb69b80", 00:07:34.033 "strip_size_kb": 64, 00:07:34.033 "state": "online", 00:07:34.033 "raid_level": "concat", 00:07:34.033 "superblock": false, 00:07:34.033 "num_base_bdevs": 2, 00:07:34.033 "num_base_bdevs_discovered": 2, 00:07:34.033 "num_base_bdevs_operational": 2, 00:07:34.033 "base_bdevs_list": [ 00:07:34.033 { 00:07:34.033 "name": "BaseBdev1", 00:07:34.033 "uuid": "9bba51f5-b82a-4636-bfa2-79979058f334", 00:07:34.033 "is_configured": true, 00:07:34.033 "data_offset": 0, 00:07:34.033 "data_size": 65536 00:07:34.033 }, 00:07:34.033 { 00:07:34.033 "name": "BaseBdev2", 00:07:34.033 "uuid": "343dbf99-8cf1-474c-a4d8-210ce2c8d258", 00:07:34.033 "is_configured": true, 00:07:34.033 "data_offset": 0, 00:07:34.033 "data_size": 65536 00:07:34.033 } 00:07:34.033 ] 00:07:34.033 }' 00:07:34.033 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:34.033 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.600 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:34.600 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:34.600 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.600 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:34.600 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.600 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.600 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:34.600 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:34.600 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.600 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.600 [2024-11-20 13:21:15.993788] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.600 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.600 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.600 "name": "Existed_Raid", 00:07:34.600 "aliases": [ 00:07:34.600 "e57eba45-1f73-4942-a1e7-eb2e4cb69b80" 00:07:34.600 ], 00:07:34.600 "product_name": "Raid Volume", 00:07:34.600 "block_size": 512, 00:07:34.600 "num_blocks": 131072, 00:07:34.600 "uuid": "e57eba45-1f73-4942-a1e7-eb2e4cb69b80", 00:07:34.600 "assigned_rate_limits": { 00:07:34.600 "rw_ios_per_sec": 0, 00:07:34.600 "rw_mbytes_per_sec": 0, 00:07:34.600 "r_mbytes_per_sec": 
0, 00:07:34.600 "w_mbytes_per_sec": 0 00:07:34.600 }, 00:07:34.600 "claimed": false, 00:07:34.600 "zoned": false, 00:07:34.600 "supported_io_types": { 00:07:34.600 "read": true, 00:07:34.600 "write": true, 00:07:34.600 "unmap": true, 00:07:34.600 "flush": true, 00:07:34.600 "reset": true, 00:07:34.600 "nvme_admin": false, 00:07:34.600 "nvme_io": false, 00:07:34.600 "nvme_io_md": false, 00:07:34.600 "write_zeroes": true, 00:07:34.600 "zcopy": false, 00:07:34.600 "get_zone_info": false, 00:07:34.600 "zone_management": false, 00:07:34.600 "zone_append": false, 00:07:34.600 "compare": false, 00:07:34.600 "compare_and_write": false, 00:07:34.600 "abort": false, 00:07:34.600 "seek_hole": false, 00:07:34.600 "seek_data": false, 00:07:34.600 "copy": false, 00:07:34.600 "nvme_iov_md": false 00:07:34.600 }, 00:07:34.600 "memory_domains": [ 00:07:34.600 { 00:07:34.600 "dma_device_id": "system", 00:07:34.600 "dma_device_type": 1 00:07:34.600 }, 00:07:34.600 { 00:07:34.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.600 "dma_device_type": 2 00:07:34.600 }, 00:07:34.600 { 00:07:34.600 "dma_device_id": "system", 00:07:34.600 "dma_device_type": 1 00:07:34.600 }, 00:07:34.600 { 00:07:34.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.600 "dma_device_type": 2 00:07:34.600 } 00:07:34.600 ], 00:07:34.600 "driver_specific": { 00:07:34.600 "raid": { 00:07:34.600 "uuid": "e57eba45-1f73-4942-a1e7-eb2e4cb69b80", 00:07:34.600 "strip_size_kb": 64, 00:07:34.600 "state": "online", 00:07:34.600 "raid_level": "concat", 00:07:34.600 "superblock": false, 00:07:34.600 "num_base_bdevs": 2, 00:07:34.600 "num_base_bdevs_discovered": 2, 00:07:34.600 "num_base_bdevs_operational": 2, 00:07:34.600 "base_bdevs_list": [ 00:07:34.600 { 00:07:34.600 "name": "BaseBdev1", 00:07:34.600 "uuid": "9bba51f5-b82a-4636-bfa2-79979058f334", 00:07:34.600 "is_configured": true, 00:07:34.600 "data_offset": 0, 00:07:34.600 "data_size": 65536 00:07:34.600 }, 00:07:34.600 { 00:07:34.600 "name": "BaseBdev2", 
00:07:34.600 "uuid": "343dbf99-8cf1-474c-a4d8-210ce2c8d258", 00:07:34.600 "is_configured": true, 00:07:34.600 "data_offset": 0, 00:07:34.600 "data_size": 65536 00:07:34.600 } 00:07:34.600 ] 00:07:34.600 } 00:07:34.600 } 00:07:34.601 }' 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:34.601 BaseBdev2' 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.601 [2024-11-20 13:21:16.217181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:34.601 [2024-11-20 13:21:16.217211] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.601 [2024-11-20 13:21:16.217273] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.601 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.601 "name": "Existed_Raid", 00:07:34.601 "uuid": "e57eba45-1f73-4942-a1e7-eb2e4cb69b80", 00:07:34.601 "strip_size_kb": 64, 00:07:34.601 
"state": "offline", 00:07:34.601 "raid_level": "concat", 00:07:34.601 "superblock": false, 00:07:34.601 "num_base_bdevs": 2, 00:07:34.601 "num_base_bdevs_discovered": 1, 00:07:34.601 "num_base_bdevs_operational": 1, 00:07:34.601 "base_bdevs_list": [ 00:07:34.601 { 00:07:34.601 "name": null, 00:07:34.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.601 "is_configured": false, 00:07:34.601 "data_offset": 0, 00:07:34.601 "data_size": 65536 00:07:34.601 }, 00:07:34.601 { 00:07:34.601 "name": "BaseBdev2", 00:07:34.601 "uuid": "343dbf99-8cf1-474c-a4d8-210ce2c8d258", 00:07:34.601 "is_configured": true, 00:07:34.601 "data_offset": 0, 00:07:34.601 "data_size": 65536 00:07:34.601 } 00:07:34.601 ] 00:07:34.601 }' 00:07:34.859 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.859 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.117 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:35.117 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.117 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.117 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.117 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.117 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:35.117 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.118 [2024-11-20 13:21:16.687715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:35.118 [2024-11-20 13:21:16.687812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72753 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72753 ']' 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 72753 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.118 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72753 00:07:35.376 killing process with pid 72753 00:07:35.376 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.376 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.376 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72753' 00:07:35.376 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72753 00:07:35.376 [2024-11-20 13:21:16.791592] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.376 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72753 00:07:35.376 [2024-11-20 13:21:16.792551] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.376 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:35.376 00:07:35.376 real 0m3.674s 00:07:35.376 user 0m5.809s 00:07:35.376 sys 0m0.723s 00:07:35.376 13:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.376 ************************************ 00:07:35.376 END TEST raid_state_function_test 00:07:35.376 ************************************ 00:07:35.376 13:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.634 13:21:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:35.634 13:21:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:35.634 13:21:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.634 13:21:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.634 ************************************ 00:07:35.634 START TEST raid_state_function_test_sb 00:07:35.634 ************************************ 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72989 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72989' 00:07:35.634 Process raid pid: 72989 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72989 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72989 ']' 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.634 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.635 13:21:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.635 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.635 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.635 [2024-11-20 13:21:17.168537] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:35.635 [2024-11-20 13:21:17.168736] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.893 [2024-11-20 13:21:17.325366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.893 [2024-11-20 13:21:17.350828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.893 [2024-11-20 13:21:17.393422] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.893 [2024-11-20 13:21:17.393537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.460 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.460 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:36.460 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.460 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.460 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.460 [2024-11-20 13:21:18.002815] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:36.460 [2024-11-20 13:21:18.002876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.460 [2024-11-20 13:21:18.002886] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.460 [2024-11-20 13:21:18.003002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.460 "name": "Existed_Raid", 00:07:36.460 "uuid": "9fd3fe7e-4253-47ef-8f9f-6db8afa0b829", 00:07:36.460 "strip_size_kb": 64, 00:07:36.460 "state": "configuring", 00:07:36.460 "raid_level": "concat", 00:07:36.460 "superblock": true, 00:07:36.460 "num_base_bdevs": 2, 00:07:36.460 "num_base_bdevs_discovered": 0, 00:07:36.460 "num_base_bdevs_operational": 2, 00:07:36.460 "base_bdevs_list": [ 00:07:36.460 { 00:07:36.460 "name": "BaseBdev1", 00:07:36.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.460 "is_configured": false, 00:07:36.460 "data_offset": 0, 00:07:36.460 "data_size": 0 00:07:36.460 }, 00:07:36.460 { 00:07:36.460 "name": "BaseBdev2", 00:07:36.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.460 "is_configured": false, 00:07:36.460 "data_offset": 0, 00:07:36.460 "data_size": 0 00:07:36.460 } 00:07:36.460 ] 00:07:36.460 }' 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.460 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.719 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.719 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.719 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.719 [2024-11-20 13:21:18.378083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:36.719 [2024-11-20 13:21:18.378172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:36.719 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.719 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.719 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.719 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.719 [2024-11-20 13:21:18.386085] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.719 [2024-11-20 13:21:18.386173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.719 [2024-11-20 13:21:18.386200] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.719 [2024-11-20 13:21:18.386237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.978 [2024-11-20 13:21:18.402971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.978 BaseBdev1 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.978 [ 00:07:36.978 { 00:07:36.978 "name": "BaseBdev1", 00:07:36.978 "aliases": [ 00:07:36.978 "cc38cbf0-a589-490d-9ee0-763cd792f1ce" 00:07:36.978 ], 00:07:36.978 "product_name": "Malloc disk", 00:07:36.978 "block_size": 512, 00:07:36.978 "num_blocks": 65536, 00:07:36.978 "uuid": "cc38cbf0-a589-490d-9ee0-763cd792f1ce", 00:07:36.978 "assigned_rate_limits": { 00:07:36.978 "rw_ios_per_sec": 0, 00:07:36.978 "rw_mbytes_per_sec": 0, 00:07:36.978 "r_mbytes_per_sec": 0, 00:07:36.978 "w_mbytes_per_sec": 0 00:07:36.978 }, 00:07:36.978 "claimed": true, 
00:07:36.978 "claim_type": "exclusive_write", 00:07:36.978 "zoned": false, 00:07:36.978 "supported_io_types": { 00:07:36.978 "read": true, 00:07:36.978 "write": true, 00:07:36.978 "unmap": true, 00:07:36.978 "flush": true, 00:07:36.978 "reset": true, 00:07:36.978 "nvme_admin": false, 00:07:36.978 "nvme_io": false, 00:07:36.978 "nvme_io_md": false, 00:07:36.978 "write_zeroes": true, 00:07:36.978 "zcopy": true, 00:07:36.978 "get_zone_info": false, 00:07:36.978 "zone_management": false, 00:07:36.978 "zone_append": false, 00:07:36.978 "compare": false, 00:07:36.978 "compare_and_write": false, 00:07:36.978 "abort": true, 00:07:36.978 "seek_hole": false, 00:07:36.978 "seek_data": false, 00:07:36.978 "copy": true, 00:07:36.978 "nvme_iov_md": false 00:07:36.978 }, 00:07:36.978 "memory_domains": [ 00:07:36.978 { 00:07:36.978 "dma_device_id": "system", 00:07:36.978 "dma_device_type": 1 00:07:36.978 }, 00:07:36.978 { 00:07:36.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.978 "dma_device_type": 2 00:07:36.978 } 00:07:36.978 ], 00:07:36.978 "driver_specific": {} 00:07:36.978 } 00:07:36.978 ] 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.978 13:21:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.978 "name": "Existed_Raid", 00:07:36.978 "uuid": "d1286e82-c040-4172-8119-8dff2ac18c98", 00:07:36.978 "strip_size_kb": 64, 00:07:36.978 "state": "configuring", 00:07:36.978 "raid_level": "concat", 00:07:36.978 "superblock": true, 00:07:36.978 "num_base_bdevs": 2, 00:07:36.978 "num_base_bdevs_discovered": 1, 00:07:36.978 "num_base_bdevs_operational": 2, 00:07:36.978 "base_bdevs_list": [ 00:07:36.978 { 00:07:36.978 "name": "BaseBdev1", 00:07:36.978 "uuid": "cc38cbf0-a589-490d-9ee0-763cd792f1ce", 00:07:36.978 "is_configured": true, 00:07:36.978 "data_offset": 2048, 00:07:36.978 "data_size": 63488 00:07:36.978 }, 00:07:36.978 { 00:07:36.978 "name": "BaseBdev2", 00:07:36.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.978 
"is_configured": false, 00:07:36.978 "data_offset": 0, 00:07:36.978 "data_size": 0 00:07:36.978 } 00:07:36.978 ] 00:07:36.978 }' 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.978 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.237 [2024-11-20 13:21:18.878194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.237 [2024-11-20 13:21:18.878240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.237 [2024-11-20 13:21:18.890206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.237 [2024-11-20 13:21:18.892024] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.237 [2024-11-20 13:21:18.892065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.237 13:21:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.237 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.496 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.496 13:21:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.496 "name": "Existed_Raid", 00:07:37.496 "uuid": "046b5d55-b7b9-4349-8df8-d4a5aed381fa", 00:07:37.496 "strip_size_kb": 64, 00:07:37.496 "state": "configuring", 00:07:37.496 "raid_level": "concat", 00:07:37.496 "superblock": true, 00:07:37.496 "num_base_bdevs": 2, 00:07:37.496 "num_base_bdevs_discovered": 1, 00:07:37.496 "num_base_bdevs_operational": 2, 00:07:37.496 "base_bdevs_list": [ 00:07:37.496 { 00:07:37.496 "name": "BaseBdev1", 00:07:37.496 "uuid": "cc38cbf0-a589-490d-9ee0-763cd792f1ce", 00:07:37.496 "is_configured": true, 00:07:37.496 "data_offset": 2048, 00:07:37.496 "data_size": 63488 00:07:37.496 }, 00:07:37.496 { 00:07:37.496 "name": "BaseBdev2", 00:07:37.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.496 "is_configured": false, 00:07:37.496 "data_offset": 0, 00:07:37.496 "data_size": 0 00:07:37.496 } 00:07:37.496 ] 00:07:37.496 }' 00:07:37.496 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.496 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.756 [2024-11-20 13:21:19.332587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.756 [2024-11-20 13:21:19.332886] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:37.756 [2024-11-20 13:21:19.332942] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:37.756 BaseBdev2 00:07:37.756 [2024-11-20 13:21:19.333263] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:37.756 [2024-11-20 13:21:19.333476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:37.756 [2024-11-20 13:21:19.333524] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:37.756 [2024-11-20 13:21:19.333697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.756 
13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.756 [ 00:07:37.756 { 00:07:37.756 "name": "BaseBdev2", 00:07:37.756 "aliases": [ 00:07:37.756 "399ef2d8-3150-48b2-8156-1bd4a2e94a0d" 00:07:37.756 ], 00:07:37.756 "product_name": "Malloc disk", 00:07:37.756 "block_size": 512, 00:07:37.756 "num_blocks": 65536, 00:07:37.756 "uuid": "399ef2d8-3150-48b2-8156-1bd4a2e94a0d", 00:07:37.756 "assigned_rate_limits": { 00:07:37.756 "rw_ios_per_sec": 0, 00:07:37.756 "rw_mbytes_per_sec": 0, 00:07:37.756 "r_mbytes_per_sec": 0, 00:07:37.756 "w_mbytes_per_sec": 0 00:07:37.756 }, 00:07:37.756 "claimed": true, 00:07:37.756 "claim_type": "exclusive_write", 00:07:37.756 "zoned": false, 00:07:37.756 "supported_io_types": { 00:07:37.756 "read": true, 00:07:37.756 "write": true, 00:07:37.756 "unmap": true, 00:07:37.756 "flush": true, 00:07:37.756 "reset": true, 00:07:37.756 "nvme_admin": false, 00:07:37.756 "nvme_io": false, 00:07:37.756 "nvme_io_md": false, 00:07:37.756 "write_zeroes": true, 00:07:37.756 "zcopy": true, 00:07:37.756 "get_zone_info": false, 00:07:37.756 "zone_management": false, 00:07:37.756 "zone_append": false, 00:07:37.756 "compare": false, 00:07:37.756 "compare_and_write": false, 00:07:37.756 "abort": true, 00:07:37.756 "seek_hole": false, 00:07:37.756 "seek_data": false, 00:07:37.756 "copy": true, 00:07:37.756 "nvme_iov_md": false 00:07:37.756 }, 00:07:37.756 "memory_domains": [ 00:07:37.756 { 00:07:37.756 "dma_device_id": "system", 00:07:37.756 "dma_device_type": 1 00:07:37.756 }, 00:07:37.756 { 00:07:37.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.756 "dma_device_type": 2 00:07:37.756 } 00:07:37.756 ], 00:07:37.756 "driver_specific": {} 00:07:37.756 } 00:07:37.756 ] 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:37.756 13:21:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.756 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.756 13:21:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.757 "name": "Existed_Raid", 00:07:37.757 "uuid": "046b5d55-b7b9-4349-8df8-d4a5aed381fa", 00:07:37.757 "strip_size_kb": 64, 00:07:37.757 "state": "online", 00:07:37.757 "raid_level": "concat", 00:07:37.757 "superblock": true, 00:07:37.757 "num_base_bdevs": 2, 00:07:37.757 "num_base_bdevs_discovered": 2, 00:07:37.757 "num_base_bdevs_operational": 2, 00:07:37.757 "base_bdevs_list": [ 00:07:37.757 { 00:07:37.757 "name": "BaseBdev1", 00:07:37.757 "uuid": "cc38cbf0-a589-490d-9ee0-763cd792f1ce", 00:07:37.757 "is_configured": true, 00:07:37.757 "data_offset": 2048, 00:07:37.757 "data_size": 63488 00:07:37.757 }, 00:07:37.757 { 00:07:37.757 "name": "BaseBdev2", 00:07:37.757 "uuid": "399ef2d8-3150-48b2-8156-1bd4a2e94a0d", 00:07:37.757 "is_configured": true, 00:07:37.757 "data_offset": 2048, 00:07:37.757 "data_size": 63488 00:07:37.757 } 00:07:37.757 ] 00:07:37.757 }' 00:07:37.757 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.757 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.324 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:38.324 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:38.324 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:38.324 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:38.324 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.324 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.324 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:38.324 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.324 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.324 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.324 [2024-11-20 13:21:19.824086] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.324 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.324 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.324 "name": "Existed_Raid", 00:07:38.324 "aliases": [ 00:07:38.324 "046b5d55-b7b9-4349-8df8-d4a5aed381fa" 00:07:38.324 ], 00:07:38.324 "product_name": "Raid Volume", 00:07:38.324 "block_size": 512, 00:07:38.324 "num_blocks": 126976, 00:07:38.324 "uuid": "046b5d55-b7b9-4349-8df8-d4a5aed381fa", 00:07:38.324 "assigned_rate_limits": { 00:07:38.324 "rw_ios_per_sec": 0, 00:07:38.324 "rw_mbytes_per_sec": 0, 00:07:38.324 "r_mbytes_per_sec": 0, 00:07:38.324 "w_mbytes_per_sec": 0 00:07:38.324 }, 00:07:38.324 "claimed": false, 00:07:38.324 "zoned": false, 00:07:38.324 "supported_io_types": { 00:07:38.324 "read": true, 00:07:38.324 "write": true, 00:07:38.324 "unmap": true, 00:07:38.324 "flush": true, 00:07:38.324 "reset": true, 00:07:38.324 "nvme_admin": false, 00:07:38.324 "nvme_io": false, 00:07:38.324 "nvme_io_md": false, 00:07:38.324 "write_zeroes": true, 00:07:38.324 "zcopy": false, 00:07:38.324 "get_zone_info": false, 00:07:38.324 "zone_management": false, 00:07:38.324 "zone_append": false, 00:07:38.324 "compare": false, 00:07:38.324 "compare_and_write": false, 00:07:38.324 "abort": false, 00:07:38.324 "seek_hole": false, 00:07:38.324 "seek_data": false, 00:07:38.324 "copy": false, 00:07:38.324 "nvme_iov_md": false 00:07:38.324 }, 00:07:38.324 "memory_domains": [ 00:07:38.324 { 00:07:38.324 
"dma_device_id": "system", 00:07:38.324 "dma_device_type": 1 00:07:38.324 }, 00:07:38.324 { 00:07:38.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.324 "dma_device_type": 2 00:07:38.324 }, 00:07:38.324 { 00:07:38.324 "dma_device_id": "system", 00:07:38.324 "dma_device_type": 1 00:07:38.324 }, 00:07:38.324 { 00:07:38.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.324 "dma_device_type": 2 00:07:38.324 } 00:07:38.324 ], 00:07:38.324 "driver_specific": { 00:07:38.324 "raid": { 00:07:38.324 "uuid": "046b5d55-b7b9-4349-8df8-d4a5aed381fa", 00:07:38.324 "strip_size_kb": 64, 00:07:38.324 "state": "online", 00:07:38.324 "raid_level": "concat", 00:07:38.324 "superblock": true, 00:07:38.324 "num_base_bdevs": 2, 00:07:38.324 "num_base_bdevs_discovered": 2, 00:07:38.324 "num_base_bdevs_operational": 2, 00:07:38.324 "base_bdevs_list": [ 00:07:38.324 { 00:07:38.324 "name": "BaseBdev1", 00:07:38.324 "uuid": "cc38cbf0-a589-490d-9ee0-763cd792f1ce", 00:07:38.324 "is_configured": true, 00:07:38.324 "data_offset": 2048, 00:07:38.324 "data_size": 63488 00:07:38.324 }, 00:07:38.324 { 00:07:38.324 "name": "BaseBdev2", 00:07:38.324 "uuid": "399ef2d8-3150-48b2-8156-1bd4a2e94a0d", 00:07:38.324 "is_configured": true, 00:07:38.324 "data_offset": 2048, 00:07:38.324 "data_size": 63488 00:07:38.324 } 00:07:38.324 ] 00:07:38.324 } 00:07:38.324 } 00:07:38.324 }' 00:07:38.324 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.324 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:38.324 BaseBdev2' 00:07:38.324 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.324 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.325 13:21:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.325 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:38.325 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.325 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.325 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.325 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.325 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.325 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.325 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.583 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.583 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:38.583 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.583 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.583 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.583 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.583 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.583 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:38.583 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.584 [2024-11-20 13:21:20.023504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:38.584 [2024-11-20 13:21:20.023533] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.584 [2024-11-20 13:21:20.023590] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.584 "name": "Existed_Raid", 00:07:38.584 "uuid": "046b5d55-b7b9-4349-8df8-d4a5aed381fa", 00:07:38.584 "strip_size_kb": 64, 00:07:38.584 "state": "offline", 00:07:38.584 "raid_level": "concat", 00:07:38.584 "superblock": true, 00:07:38.584 "num_base_bdevs": 2, 00:07:38.584 "num_base_bdevs_discovered": 1, 00:07:38.584 "num_base_bdevs_operational": 1, 00:07:38.584 "base_bdevs_list": [ 00:07:38.584 { 00:07:38.584 "name": null, 00:07:38.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.584 "is_configured": false, 00:07:38.584 "data_offset": 0, 00:07:38.584 "data_size": 63488 00:07:38.584 }, 00:07:38.584 { 00:07:38.584 "name": "BaseBdev2", 00:07:38.584 "uuid": "399ef2d8-3150-48b2-8156-1bd4a2e94a0d", 00:07:38.584 "is_configured": true, 00:07:38.584 "data_offset": 2048, 00:07:38.584 "data_size": 63488 00:07:38.584 } 00:07:38.584 ] 
00:07:38.584 }' 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.584 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.842 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:38.842 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:38.842 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:38.842 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.842 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.842 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.842 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.101 [2024-11-20 13:21:20.517920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:39.101 [2024-11-20 13:21:20.518031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.101 13:21:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72989 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72989 ']' 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72989 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72989 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72989' 00:07:39.101 killing process with pid 72989 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72989 00:07:39.101 [2024-11-20 13:21:20.625444] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.101 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72989 00:07:39.101 [2024-11-20 13:21:20.626497] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.359 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:39.359 00:07:39.359 real 0m3.751s 00:07:39.359 user 0m5.946s 00:07:39.359 sys 0m0.748s 00:07:39.359 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.359 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.360 ************************************ 00:07:39.360 END TEST raid_state_function_test_sb 00:07:39.360 ************************************ 00:07:39.360 13:21:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:39.360 13:21:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:39.360 13:21:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.360 13:21:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.360 ************************************ 00:07:39.360 START TEST raid_superblock_test 00:07:39.360 ************************************ 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73230 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73230 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73230 ']' 00:07:39.360 13:21:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.360 13:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.360 [2024-11-20 13:21:20.988444] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:39.360 [2024-11-20 13:21:20.988689] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73230 ] 00:07:39.618 [2024-11-20 13:21:21.146141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.618 [2024-11-20 13:21:21.173347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.618 [2024-11-20 13:21:21.216213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.618 [2024-11-20 13:21:21.216339] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:40.186 
13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.186 malloc1 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.186 [2024-11-20 13:21:21.831049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:40.186 [2024-11-20 13:21:21.831163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.186 [2024-11-20 13:21:21.831207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:40.186 [2024-11-20 13:21:21.831250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:40.186 [2024-11-20 13:21:21.833422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.186 [2024-11-20 13:21:21.833497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:40.186 pt1 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:40.186 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:40.187 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:40.187 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:40.187 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:40.187 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:40.187 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:40.187 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:40.187 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.187 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.187 malloc2 00:07:40.187 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.187 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.446 [2024-11-20 13:21:21.859911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:40.446 [2024-11-20 13:21:21.860025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.446 [2024-11-20 13:21:21.860070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:40.446 [2024-11-20 13:21:21.860105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.446 [2024-11-20 13:21:21.862269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.446 [2024-11-20 13:21:21.862343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:40.446 pt2 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.446 [2024-11-20 13:21:21.871938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:40.446 [2024-11-20 13:21:21.873936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:40.446 [2024-11-20 13:21:21.874133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:40.446 [2024-11-20 13:21:21.874188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:40.446 [2024-11-20 13:21:21.874488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:40.446 [2024-11-20 13:21:21.874668] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:40.446 [2024-11-20 13:21:21.874715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:40.446 [2024-11-20 13:21:21.874879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.446 13:21:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.446 "name": "raid_bdev1", 00:07:40.446 "uuid": "64cbe9ff-edea-4f40-9e84-302f07254c2e", 00:07:40.446 "strip_size_kb": 64, 00:07:40.446 "state": "online", 00:07:40.446 "raid_level": "concat", 00:07:40.446 "superblock": true, 00:07:40.446 "num_base_bdevs": 2, 00:07:40.446 "num_base_bdevs_discovered": 2, 00:07:40.446 "num_base_bdevs_operational": 2, 00:07:40.446 "base_bdevs_list": [ 00:07:40.446 { 00:07:40.446 "name": "pt1", 00:07:40.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:40.446 "is_configured": true, 00:07:40.446 "data_offset": 2048, 00:07:40.446 "data_size": 63488 00:07:40.446 }, 00:07:40.446 { 00:07:40.446 "name": "pt2", 00:07:40.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.446 "is_configured": true, 00:07:40.446 "data_offset": 2048, 00:07:40.446 "data_size": 63488 00:07:40.446 } 00:07:40.446 ] 00:07:40.446 }' 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.446 13:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.705 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:40.705 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:40.705 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:40.705 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:40.705 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:40.705 
13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:40.705 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:40.705 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.705 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.705 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:40.705 [2024-11-20 13:21:22.283529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.705 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.705 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:40.705 "name": "raid_bdev1", 00:07:40.705 "aliases": [ 00:07:40.705 "64cbe9ff-edea-4f40-9e84-302f07254c2e" 00:07:40.705 ], 00:07:40.705 "product_name": "Raid Volume", 00:07:40.705 "block_size": 512, 00:07:40.705 "num_blocks": 126976, 00:07:40.705 "uuid": "64cbe9ff-edea-4f40-9e84-302f07254c2e", 00:07:40.705 "assigned_rate_limits": { 00:07:40.705 "rw_ios_per_sec": 0, 00:07:40.705 "rw_mbytes_per_sec": 0, 00:07:40.705 "r_mbytes_per_sec": 0, 00:07:40.705 "w_mbytes_per_sec": 0 00:07:40.705 }, 00:07:40.705 "claimed": false, 00:07:40.705 "zoned": false, 00:07:40.705 "supported_io_types": { 00:07:40.705 "read": true, 00:07:40.705 "write": true, 00:07:40.705 "unmap": true, 00:07:40.705 "flush": true, 00:07:40.705 "reset": true, 00:07:40.705 "nvme_admin": false, 00:07:40.705 "nvme_io": false, 00:07:40.705 "nvme_io_md": false, 00:07:40.705 "write_zeroes": true, 00:07:40.705 "zcopy": false, 00:07:40.705 "get_zone_info": false, 00:07:40.705 "zone_management": false, 00:07:40.705 "zone_append": false, 00:07:40.705 "compare": false, 00:07:40.705 "compare_and_write": false, 00:07:40.705 "abort": false, 00:07:40.705 "seek_hole": false, 00:07:40.705 
"seek_data": false, 00:07:40.705 "copy": false, 00:07:40.705 "nvme_iov_md": false 00:07:40.705 }, 00:07:40.705 "memory_domains": [ 00:07:40.705 { 00:07:40.705 "dma_device_id": "system", 00:07:40.705 "dma_device_type": 1 00:07:40.705 }, 00:07:40.705 { 00:07:40.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.705 "dma_device_type": 2 00:07:40.705 }, 00:07:40.705 { 00:07:40.705 "dma_device_id": "system", 00:07:40.705 "dma_device_type": 1 00:07:40.705 }, 00:07:40.705 { 00:07:40.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.705 "dma_device_type": 2 00:07:40.705 } 00:07:40.705 ], 00:07:40.705 "driver_specific": { 00:07:40.705 "raid": { 00:07:40.705 "uuid": "64cbe9ff-edea-4f40-9e84-302f07254c2e", 00:07:40.705 "strip_size_kb": 64, 00:07:40.705 "state": "online", 00:07:40.705 "raid_level": "concat", 00:07:40.705 "superblock": true, 00:07:40.705 "num_base_bdevs": 2, 00:07:40.705 "num_base_bdevs_discovered": 2, 00:07:40.705 "num_base_bdevs_operational": 2, 00:07:40.705 "base_bdevs_list": [ 00:07:40.705 { 00:07:40.705 "name": "pt1", 00:07:40.705 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:40.705 "is_configured": true, 00:07:40.705 "data_offset": 2048, 00:07:40.705 "data_size": 63488 00:07:40.705 }, 00:07:40.705 { 00:07:40.705 "name": "pt2", 00:07:40.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.705 "is_configured": true, 00:07:40.705 "data_offset": 2048, 00:07:40.705 "data_size": 63488 00:07:40.705 } 00:07:40.705 ] 00:07:40.705 } 00:07:40.705 } 00:07:40.705 }' 00:07:40.705 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:40.965 pt2' 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.965 13:21:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.965 [2024-11-20 13:21:22.503061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=64cbe9ff-edea-4f40-9e84-302f07254c2e 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 64cbe9ff-edea-4f40-9e84-302f07254c2e ']' 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.965 [2024-11-20 13:21:22.546739] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:40.965 [2024-11-20 13:21:22.546807] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:40.965 [2024-11-20 13:21:22.546893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.965 [2024-11-20 13:21:22.546972] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:40.965 [2024-11-20 13:21:22.547095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq 
-r '.[]' 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.965 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.225 [2024-11-20 13:21:22.658567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:41.225 [2024-11-20 13:21:22.660463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:41.225 [2024-11-20 13:21:22.660527] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:41.225 [2024-11-20 13:21:22.660568] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:41.225 [2024-11-20 13:21:22.660583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.225 [2024-11-20 13:21:22.660591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:41.225 request: 00:07:41.225 { 00:07:41.225 "name": "raid_bdev1", 00:07:41.225 "raid_level": "concat", 00:07:41.225 "base_bdevs": [ 00:07:41.225 "malloc1", 00:07:41.225 "malloc2" 00:07:41.225 ], 00:07:41.225 "strip_size_kb": 64, 00:07:41.225 "superblock": false, 00:07:41.225 "method": "bdev_raid_create", 00:07:41.225 "req_id": 1 00:07:41.225 } 00:07:41.225 Got JSON-RPC error response 00:07:41.225 response: 00:07:41.225 { 00:07:41.225 "code": -17, 00:07:41.225 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:41.225 } 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.225 
13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.225 [2024-11-20 13:21:22.722437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:41.225 [2024-11-20 13:21:22.722525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.225 [2024-11-20 13:21:22.722559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:41.225 [2024-11-20 13:21:22.722585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.225 [2024-11-20 13:21:22.724719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.225 [2024-11-20 13:21:22.724788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:41.225 [2024-11-20 13:21:22.724874] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:41.225 [2024-11-20 13:21:22.724936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:41.225 pt1 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.225 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.225 "name": "raid_bdev1", 00:07:41.225 "uuid": "64cbe9ff-edea-4f40-9e84-302f07254c2e", 00:07:41.225 "strip_size_kb": 64, 00:07:41.225 "state": "configuring", 00:07:41.225 "raid_level": "concat", 00:07:41.225 "superblock": true, 00:07:41.225 "num_base_bdevs": 2, 00:07:41.225 "num_base_bdevs_discovered": 1, 00:07:41.225 "num_base_bdevs_operational": 2, 00:07:41.225 "base_bdevs_list": [ 00:07:41.225 { 00:07:41.225 "name": "pt1", 00:07:41.225 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:41.225 "is_configured": true, 00:07:41.225 "data_offset": 2048, 00:07:41.225 "data_size": 63488 00:07:41.225 }, 00:07:41.225 { 00:07:41.225 "name": null, 00:07:41.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.226 "is_configured": false, 00:07:41.226 "data_offset": 2048, 00:07:41.226 "data_size": 63488 00:07:41.226 } 00:07:41.226 ] 00:07:41.226 }' 00:07:41.226 13:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.226 13:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.485 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:41.485 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:41.485 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:41.485 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:41.485 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.485 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.744 [2024-11-20 13:21:23.153708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:41.744 [2024-11-20 13:21:23.153772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.744 [2024-11-20 13:21:23.153794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:41.744 [2024-11-20 13:21:23.153803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.744 [2024-11-20 13:21:23.154193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.744 [2024-11-20 13:21:23.154221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:41.744 [2024-11-20 13:21:23.154297] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:41.744 [2024-11-20 13:21:23.154318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:41.744 [2024-11-20 13:21:23.154410] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:41.744 [2024-11-20 13:21:23.154418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:41.744 [2024-11-20 13:21:23.154653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:41.744 [2024-11-20 13:21:23.154760] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:41.745 [2024-11-20 13:21:23.154773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:41.745 [2024-11-20 13:21:23.154870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.745 pt2 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.745 "name": "raid_bdev1", 00:07:41.745 "uuid": "64cbe9ff-edea-4f40-9e84-302f07254c2e", 00:07:41.745 "strip_size_kb": 64, 00:07:41.745 "state": "online", 00:07:41.745 "raid_level": "concat", 00:07:41.745 "superblock": true, 00:07:41.745 "num_base_bdevs": 2, 00:07:41.745 "num_base_bdevs_discovered": 2, 00:07:41.745 "num_base_bdevs_operational": 2, 00:07:41.745 "base_bdevs_list": [ 00:07:41.745 { 00:07:41.745 "name": "pt1", 00:07:41.745 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:41.745 "is_configured": true, 00:07:41.745 "data_offset": 2048, 00:07:41.745 "data_size": 63488 00:07:41.745 }, 00:07:41.745 { 00:07:41.745 "name": "pt2", 00:07:41.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.745 "is_configured": true, 00:07:41.745 "data_offset": 2048, 00:07:41.745 "data_size": 63488 00:07:41.745 } 00:07:41.745 ] 00:07:41.745 }' 00:07:41.745 13:21:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.745 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.004 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:42.004 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:42.004 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:42.004 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:42.004 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:42.004 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:42.004 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:42.004 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:42.004 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.004 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.004 [2024-11-20 13:21:23.633158] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.004 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:42.264 "name": "raid_bdev1", 00:07:42.264 "aliases": [ 00:07:42.264 "64cbe9ff-edea-4f40-9e84-302f07254c2e" 00:07:42.264 ], 00:07:42.264 "product_name": "Raid Volume", 00:07:42.264 "block_size": 512, 00:07:42.264 "num_blocks": 126976, 00:07:42.264 "uuid": "64cbe9ff-edea-4f40-9e84-302f07254c2e", 00:07:42.264 "assigned_rate_limits": { 00:07:42.264 "rw_ios_per_sec": 0, 00:07:42.264 "rw_mbytes_per_sec": 0, 00:07:42.264 
"r_mbytes_per_sec": 0, 00:07:42.264 "w_mbytes_per_sec": 0 00:07:42.264 }, 00:07:42.264 "claimed": false, 00:07:42.264 "zoned": false, 00:07:42.264 "supported_io_types": { 00:07:42.264 "read": true, 00:07:42.264 "write": true, 00:07:42.264 "unmap": true, 00:07:42.264 "flush": true, 00:07:42.264 "reset": true, 00:07:42.264 "nvme_admin": false, 00:07:42.264 "nvme_io": false, 00:07:42.264 "nvme_io_md": false, 00:07:42.264 "write_zeroes": true, 00:07:42.264 "zcopy": false, 00:07:42.264 "get_zone_info": false, 00:07:42.264 "zone_management": false, 00:07:42.264 "zone_append": false, 00:07:42.264 "compare": false, 00:07:42.264 "compare_and_write": false, 00:07:42.264 "abort": false, 00:07:42.264 "seek_hole": false, 00:07:42.264 "seek_data": false, 00:07:42.264 "copy": false, 00:07:42.264 "nvme_iov_md": false 00:07:42.264 }, 00:07:42.264 "memory_domains": [ 00:07:42.264 { 00:07:42.264 "dma_device_id": "system", 00:07:42.264 "dma_device_type": 1 00:07:42.264 }, 00:07:42.264 { 00:07:42.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.264 "dma_device_type": 2 00:07:42.264 }, 00:07:42.264 { 00:07:42.264 "dma_device_id": "system", 00:07:42.264 "dma_device_type": 1 00:07:42.264 }, 00:07:42.264 { 00:07:42.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.264 "dma_device_type": 2 00:07:42.264 } 00:07:42.264 ], 00:07:42.264 "driver_specific": { 00:07:42.264 "raid": { 00:07:42.264 "uuid": "64cbe9ff-edea-4f40-9e84-302f07254c2e", 00:07:42.264 "strip_size_kb": 64, 00:07:42.264 "state": "online", 00:07:42.264 "raid_level": "concat", 00:07:42.264 "superblock": true, 00:07:42.264 "num_base_bdevs": 2, 00:07:42.264 "num_base_bdevs_discovered": 2, 00:07:42.264 "num_base_bdevs_operational": 2, 00:07:42.264 "base_bdevs_list": [ 00:07:42.264 { 00:07:42.264 "name": "pt1", 00:07:42.264 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:42.264 "is_configured": true, 00:07:42.264 "data_offset": 2048, 00:07:42.264 "data_size": 63488 00:07:42.264 }, 00:07:42.264 { 00:07:42.264 "name": 
"pt2", 00:07:42.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:42.264 "is_configured": true, 00:07:42.264 "data_offset": 2048, 00:07:42.264 "data_size": 63488 00:07:42.264 } 00:07:42.264 ] 00:07:42.264 } 00:07:42.264 } 00:07:42.264 }' 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:42.264 pt2' 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:42.264 [2024-11-20 13:21:23.868779] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 64cbe9ff-edea-4f40-9e84-302f07254c2e '!=' 64cbe9ff-edea-4f40-9e84-302f07254c2e ']' 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:42.264 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:42.265 13:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73230 00:07:42.265 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73230 ']' 00:07:42.265 13:21:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 73230 00:07:42.265 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:42.265 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.265 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73230 00:07:42.524 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.524 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.524 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73230' 00:07:42.524 killing process with pid 73230 00:07:42.524 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 73230 00:07:42.524 [2024-11-20 13:21:23.946002] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:42.524 [2024-11-20 13:21:23.946174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.524 13:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 73230 00:07:42.524 [2024-11-20 13:21:23.946262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:42.524 [2024-11-20 13:21:23.946273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:42.524 [2024-11-20 13:21:23.968867] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.524 13:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:42.524 00:07:42.524 real 0m3.280s 00:07:42.524 user 0m5.118s 00:07:42.524 sys 0m0.662s 00:07:42.524 13:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.524 13:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:42.524 ************************************ 00:07:42.524 END TEST raid_superblock_test 00:07:42.524 ************************************ 00:07:42.783 13:21:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:42.783 13:21:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:42.783 13:21:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.783 13:21:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.783 ************************************ 00:07:42.783 START TEST raid_read_error_test 00:07:42.783 ************************************ 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nQ5xsOhRSK 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73425 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73425 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73425 ']' 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.783 13:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.784 13:21:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.784 13:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.784 13:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.784 [2024-11-20 13:21:24.342332] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:42.784 [2024-11-20 13:21:24.342551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73425 ] 00:07:43.042 [2024-11-20 13:21:24.496010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.042 [2024-11-20 13:21:24.520917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.042 [2024-11-20 13:21:24.563564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.042 [2024-11-20 13:21:24.563683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.611 BaseBdev1_malloc 
00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.611 true 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.611 [2024-11-20 13:21:25.201681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:43.611 [2024-11-20 13:21:25.201780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.611 [2024-11-20 13:21:25.201820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:43.611 [2024-11-20 13:21:25.201829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.611 [2024-11-20 13:21:25.203900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.611 [2024-11-20 13:21:25.203938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:43.611 BaseBdev1 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.611 BaseBdev2_malloc 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.611 true 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.611 [2024-11-20 13:21:25.242175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:43.611 [2024-11-20 13:21:25.242219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.611 [2024-11-20 13:21:25.242237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:43.611 [2024-11-20 13:21:25.242253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.611 [2024-11-20 13:21:25.244270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.611 [2024-11-20 13:21:25.244358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:43.611 BaseBdev2 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.611 [2024-11-20 13:21:25.254202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:43.611 [2024-11-20 13:21:25.256026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:43.611 [2024-11-20 13:21:25.256194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:43.611 [2024-11-20 13:21:25.256207] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:43.611 [2024-11-20 13:21:25.256452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:43.611 [2024-11-20 13:21:25.256582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:43.611 [2024-11-20 13:21:25.256602] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:43.611 [2024-11-20 13:21:25.256711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.611 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.871 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.871 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.871 "name": "raid_bdev1", 00:07:43.871 "uuid": "b2a0caf5-4d70-4534-8388-7887d59e2caa", 00:07:43.871 "strip_size_kb": 64, 00:07:43.871 "state": "online", 00:07:43.871 "raid_level": "concat", 00:07:43.871 "superblock": true, 00:07:43.871 "num_base_bdevs": 2, 00:07:43.871 "num_base_bdevs_discovered": 2, 00:07:43.871 "num_base_bdevs_operational": 2, 00:07:43.871 "base_bdevs_list": [ 00:07:43.871 { 00:07:43.871 "name": "BaseBdev1", 00:07:43.871 "uuid": "3e55f25d-7bcf-55f2-8dca-3153c5e4f967", 00:07:43.871 "is_configured": true, 00:07:43.871 "data_offset": 2048, 00:07:43.871 "data_size": 63488 00:07:43.871 }, 00:07:43.871 { 00:07:43.871 "name": "BaseBdev2", 00:07:43.871 
"uuid": "be489597-3da9-5a65-aabf-5394eea4b293", 00:07:43.871 "is_configured": true, 00:07:43.871 "data_offset": 2048, 00:07:43.871 "data_size": 63488 00:07:43.871 } 00:07:43.871 ] 00:07:43.871 }' 00:07:43.871 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.871 13:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.130 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:44.130 13:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:44.130 [2024-11-20 13:21:25.757710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.067 13:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.327 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.327 "name": "raid_bdev1", 00:07:45.327 "uuid": "b2a0caf5-4d70-4534-8388-7887d59e2caa", 00:07:45.327 "strip_size_kb": 64, 00:07:45.327 "state": "online", 00:07:45.327 "raid_level": "concat", 00:07:45.327 "superblock": true, 00:07:45.327 "num_base_bdevs": 2, 00:07:45.327 "num_base_bdevs_discovered": 2, 00:07:45.327 "num_base_bdevs_operational": 2, 00:07:45.327 "base_bdevs_list": [ 00:07:45.327 { 00:07:45.327 "name": "BaseBdev1", 00:07:45.327 "uuid": "3e55f25d-7bcf-55f2-8dca-3153c5e4f967", 00:07:45.327 "is_configured": true, 00:07:45.327 "data_offset": 2048, 00:07:45.327 "data_size": 63488 00:07:45.327 }, 00:07:45.327 { 00:07:45.327 "name": "BaseBdev2", 00:07:45.327 "uuid": 
"be489597-3da9-5a65-aabf-5394eea4b293", 00:07:45.327 "is_configured": true, 00:07:45.327 "data_offset": 2048, 00:07:45.327 "data_size": 63488 00:07:45.327 } 00:07:45.327 ] 00:07:45.327 }' 00:07:45.327 13:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.327 13:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.586 13:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:45.586 13:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.586 13:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.586 [2024-11-20 13:21:27.133547] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:45.586 [2024-11-20 13:21:27.133650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.586 [2024-11-20 13:21:27.136245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.586 [2024-11-20 13:21:27.136296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.587 [2024-11-20 13:21:27.136331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.587 [2024-11-20 13:21:27.136341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:45.587 { 00:07:45.587 "results": [ 00:07:45.587 { 00:07:45.587 "job": "raid_bdev1", 00:07:45.587 "core_mask": "0x1", 00:07:45.587 "workload": "randrw", 00:07:45.587 "percentage": 50, 00:07:45.587 "status": "finished", 00:07:45.587 "queue_depth": 1, 00:07:45.587 "io_size": 131072, 00:07:45.587 "runtime": 1.376674, 00:07:45.587 "iops": 17131.86999972397, 00:07:45.587 "mibps": 2141.4837499654964, 00:07:45.587 "io_failed": 1, 00:07:45.587 "io_timeout": 0, 00:07:45.587 "avg_latency_us": 
80.70908528743828, 00:07:45.587 "min_latency_us": 24.593886462882097, 00:07:45.587 "max_latency_us": 1387.989519650655 00:07:45.587 } 00:07:45.587 ], 00:07:45.587 "core_count": 1 00:07:45.587 } 00:07:45.587 13:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.587 13:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73425 00:07:45.587 13:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73425 ']' 00:07:45.587 13:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73425 00:07:45.587 13:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:45.587 13:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.587 13:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73425 00:07:45.587 13:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.587 13:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.587 13:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73425' 00:07:45.587 killing process with pid 73425 00:07:45.587 13:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73425 00:07:45.587 [2024-11-20 13:21:27.189371] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.587 13:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73425 00:07:45.587 [2024-11-20 13:21:27.205117] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.846 13:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nQ5xsOhRSK 00:07:45.846 13:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:45.846 
13:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:45.846 13:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:45.846 13:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:45.846 13:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.846 13:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:45.846 13:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:45.846 ************************************ 00:07:45.846 END TEST raid_read_error_test 00:07:45.846 ************************************ 00:07:45.846 00:07:45.846 real 0m3.168s 00:07:45.846 user 0m4.070s 00:07:45.846 sys 0m0.463s 00:07:45.846 13:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.846 13:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.846 13:21:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:45.846 13:21:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:45.846 13:21:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.846 13:21:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.846 ************************************ 00:07:45.846 START TEST raid_write_error_test 00:07:45.846 ************************************ 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:45.846 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:45.847 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:45.847 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:45.847 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:45.847 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:45.847 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:45.847 13:21:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4AoKFIs81s 00:07:45.847 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73554 00:07:45.847 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:45.847 13:21:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73554 00:07:45.847 13:21:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73554 ']' 00:07:45.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.847 13:21:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.847 13:21:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.847 13:21:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.847 13:21:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.847 13:21:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.113 [2024-11-20 13:21:27.590588] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:46.113 [2024-11-20 13:21:27.590730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73554 ] 00:07:46.113 [2024-11-20 13:21:27.744233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.113 [2024-11-20 13:21:27.769380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.414 [2024-11-20 13:21:27.812225] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.414 [2024-11-20 13:21:27.812260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.982 BaseBdev1_malloc 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.982 true 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.982 [2024-11-20 13:21:28.442557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:46.982 [2024-11-20 13:21:28.442616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.982 [2024-11-20 13:21:28.442652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:46.982 [2024-11-20 13:21:28.442661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.982 [2024-11-20 13:21:28.444760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.982 [2024-11-20 13:21:28.444797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:46.982 BaseBdev1 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.982 BaseBdev2_malloc 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:46.982 13:21:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.982 true 00:07:46.982 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.983 [2024-11-20 13:21:28.483077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:46.983 [2024-11-20 13:21:28.483120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.983 [2024-11-20 13:21:28.483155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:46.983 [2024-11-20 13:21:28.483171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.983 [2024-11-20 13:21:28.485232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.983 [2024-11-20 13:21:28.485308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:46.983 BaseBdev2 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.983 [2024-11-20 13:21:28.495086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:46.983 [2024-11-20 13:21:28.496950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.983 [2024-11-20 13:21:28.497133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:46.983 [2024-11-20 13:21:28.497146] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:46.983 [2024-11-20 13:21:28.497377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:46.983 [2024-11-20 13:21:28.497489] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:46.983 [2024-11-20 13:21:28.497501] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:46.983 [2024-11-20 13:21:28.497622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.983 13:21:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.983 "name": "raid_bdev1", 00:07:46.983 "uuid": "8935e0dd-fe51-4cf8-830b-791405b71910", 00:07:46.983 "strip_size_kb": 64, 00:07:46.983 "state": "online", 00:07:46.983 "raid_level": "concat", 00:07:46.983 "superblock": true, 00:07:46.983 "num_base_bdevs": 2, 00:07:46.983 "num_base_bdevs_discovered": 2, 00:07:46.983 "num_base_bdevs_operational": 2, 00:07:46.983 "base_bdevs_list": [ 00:07:46.983 { 00:07:46.983 "name": "BaseBdev1", 00:07:46.983 "uuid": "92cc5d93-6cf6-5f09-b495-05ef9b5a9a06", 00:07:46.983 "is_configured": true, 00:07:46.983 "data_offset": 2048, 00:07:46.983 "data_size": 63488 00:07:46.983 }, 00:07:46.983 { 00:07:46.983 "name": "BaseBdev2", 00:07:46.983 "uuid": "69d7ed1b-a239-529e-a747-9ebd49792a18", 00:07:46.983 "is_configured": true, 00:07:46.983 "data_offset": 2048, 00:07:46.983 "data_size": 63488 00:07:46.983 } 00:07:46.983 ] 00:07:46.983 }' 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.983 13:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.551 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:47.551 13:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:47.551 [2024-11-20 13:21:29.054535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.486 13:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.486 13:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.487 13:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.487 "name": "raid_bdev1", 00:07:48.487 "uuid": "8935e0dd-fe51-4cf8-830b-791405b71910", 00:07:48.487 "strip_size_kb": 64, 00:07:48.487 "state": "online", 00:07:48.487 "raid_level": "concat", 00:07:48.487 "superblock": true, 00:07:48.487 "num_base_bdevs": 2, 00:07:48.487 "num_base_bdevs_discovered": 2, 00:07:48.487 "num_base_bdevs_operational": 2, 00:07:48.487 "base_bdevs_list": [ 00:07:48.487 { 00:07:48.487 "name": "BaseBdev1", 00:07:48.487 "uuid": "92cc5d93-6cf6-5f09-b495-05ef9b5a9a06", 00:07:48.487 "is_configured": true, 00:07:48.487 "data_offset": 2048, 00:07:48.487 "data_size": 63488 00:07:48.487 }, 00:07:48.487 { 00:07:48.487 "name": "BaseBdev2", 00:07:48.487 "uuid": "69d7ed1b-a239-529e-a747-9ebd49792a18", 00:07:48.487 "is_configured": true, 00:07:48.487 "data_offset": 2048, 00:07:48.487 "data_size": 63488 00:07:48.487 } 00:07:48.487 ] 00:07:48.487 }' 00:07:48.487 13:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.487 13:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.054 13:21:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:49.054 13:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.054 13:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.054 [2024-11-20 13:21:30.442779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:49.055 [2024-11-20 13:21:30.442810] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.055 [2024-11-20 13:21:30.445233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.055 [2024-11-20 13:21:30.445279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.055 [2024-11-20 13:21:30.445312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.055 [2024-11-20 13:21:30.445321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:49.055 { 00:07:49.055 "results": [ 00:07:49.055 { 00:07:49.055 "job": "raid_bdev1", 00:07:49.055 "core_mask": "0x1", 00:07:49.055 "workload": "randrw", 00:07:49.055 "percentage": 50, 00:07:49.055 "status": "finished", 00:07:49.055 "queue_depth": 1, 00:07:49.055 "io_size": 131072, 00:07:49.055 "runtime": 1.388989, 00:07:49.055 "iops": 17285.23408032749, 00:07:49.055 "mibps": 2160.654260040936, 00:07:49.055 "io_failed": 1, 00:07:49.055 "io_timeout": 0, 00:07:49.055 "avg_latency_us": 80.02143269998491, 00:07:49.055 "min_latency_us": 24.482096069868994, 00:07:49.055 "max_latency_us": 1387.989519650655 00:07:49.055 } 00:07:49.055 ], 00:07:49.055 "core_count": 1 00:07:49.055 } 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73554 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@954 -- # '[' -z 73554 ']' 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73554 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73554 00:07:49.055 killing process with pid 73554 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73554' 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73554 00:07:49.055 [2024-11-20 13:21:30.478340] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73554 00:07:49.055 [2024-11-20 13:21:30.493464] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4AoKFIs81s 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:49.055 ************************************ 00:07:49.055 END TEST raid_write_error_test 00:07:49.055 ************************************ 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:49.055 00:07:49.055 real 0m3.216s 00:07:49.055 user 0m4.161s 00:07:49.055 sys 0m0.471s 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.055 13:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.315 13:21:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:49.315 13:21:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:49.315 13:21:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:49.315 13:21:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.315 13:21:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.315 ************************************ 00:07:49.315 START TEST raid_state_function_test 00:07:49.315 ************************************ 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:49.315 Process raid pid: 73681 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73681 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73681' 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73681 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73681 ']' 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.315 13:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.315 [2024-11-20 13:21:30.863949] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:49.315 [2024-11-20 13:21:30.864175] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.575 [2024-11-20 13:21:31.019052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.575 [2024-11-20 13:21:31.046012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.575 [2024-11-20 13:21:31.088820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.575 [2024-11-20 13:21:31.088942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.142 [2024-11-20 13:21:31.694397] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.142 [2024-11-20 13:21:31.694459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.142 [2024-11-20 13:21:31.694477] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.142 [2024-11-20 13:21:31.694488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.142 13:21:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.142 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.142 "name": "Existed_Raid", 00:07:50.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.142 "strip_size_kb": 0, 00:07:50.142 "state": "configuring", 00:07:50.142 
"raid_level": "raid1", 00:07:50.142 "superblock": false, 00:07:50.142 "num_base_bdevs": 2, 00:07:50.142 "num_base_bdevs_discovered": 0, 00:07:50.142 "num_base_bdevs_operational": 2, 00:07:50.142 "base_bdevs_list": [ 00:07:50.142 { 00:07:50.142 "name": "BaseBdev1", 00:07:50.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.142 "is_configured": false, 00:07:50.142 "data_offset": 0, 00:07:50.142 "data_size": 0 00:07:50.142 }, 00:07:50.142 { 00:07:50.142 "name": "BaseBdev2", 00:07:50.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.142 "is_configured": false, 00:07:50.142 "data_offset": 0, 00:07:50.142 "data_size": 0 00:07:50.142 } 00:07:50.142 ] 00:07:50.143 }' 00:07:50.143 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.143 13:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.711 [2024-11-20 13:21:32.185484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.711 [2024-11-20 13:21:32.185526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:50.711 [2024-11-20 13:21:32.197453] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.711 [2024-11-20 13:21:32.197496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.711 [2024-11-20 13:21:32.197504] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.711 [2024-11-20 13:21:32.197538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.711 [2024-11-20 13:21:32.218389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.711 BaseBdev1 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.711 [ 00:07:50.711 { 00:07:50.711 "name": "BaseBdev1", 00:07:50.711 "aliases": [ 00:07:50.711 "ead79c47-4528-4e97-9d3b-96469fa60d85" 00:07:50.711 ], 00:07:50.711 "product_name": "Malloc disk", 00:07:50.711 "block_size": 512, 00:07:50.711 "num_blocks": 65536, 00:07:50.711 "uuid": "ead79c47-4528-4e97-9d3b-96469fa60d85", 00:07:50.711 "assigned_rate_limits": { 00:07:50.711 "rw_ios_per_sec": 0, 00:07:50.711 "rw_mbytes_per_sec": 0, 00:07:50.711 "r_mbytes_per_sec": 0, 00:07:50.711 "w_mbytes_per_sec": 0 00:07:50.711 }, 00:07:50.711 "claimed": true, 00:07:50.711 "claim_type": "exclusive_write", 00:07:50.711 "zoned": false, 00:07:50.711 "supported_io_types": { 00:07:50.711 "read": true, 00:07:50.711 "write": true, 00:07:50.711 "unmap": true, 00:07:50.711 "flush": true, 00:07:50.711 "reset": true, 00:07:50.711 "nvme_admin": false, 00:07:50.711 "nvme_io": false, 00:07:50.711 "nvme_io_md": false, 00:07:50.711 "write_zeroes": true, 00:07:50.711 "zcopy": true, 00:07:50.711 "get_zone_info": false, 00:07:50.711 "zone_management": false, 00:07:50.711 "zone_append": false, 00:07:50.711 "compare": false, 00:07:50.711 "compare_and_write": false, 00:07:50.711 "abort": true, 00:07:50.711 "seek_hole": false, 00:07:50.711 "seek_data": false, 00:07:50.711 "copy": true, 00:07:50.711 "nvme_iov_md": 
false 00:07:50.711 }, 00:07:50.711 "memory_domains": [ 00:07:50.711 { 00:07:50.711 "dma_device_id": "system", 00:07:50.711 "dma_device_type": 1 00:07:50.711 }, 00:07:50.711 { 00:07:50.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.711 "dma_device_type": 2 00:07:50.711 } 00:07:50.711 ], 00:07:50.711 "driver_specific": {} 00:07:50.711 } 00:07:50.711 ] 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.711 13:21:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.711 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.711 "name": "Existed_Raid", 00:07:50.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.711 "strip_size_kb": 0, 00:07:50.711 "state": "configuring", 00:07:50.711 "raid_level": "raid1", 00:07:50.711 "superblock": false, 00:07:50.711 "num_base_bdevs": 2, 00:07:50.711 "num_base_bdevs_discovered": 1, 00:07:50.711 "num_base_bdevs_operational": 2, 00:07:50.711 "base_bdevs_list": [ 00:07:50.711 { 00:07:50.711 "name": "BaseBdev1", 00:07:50.711 "uuid": "ead79c47-4528-4e97-9d3b-96469fa60d85", 00:07:50.711 "is_configured": true, 00:07:50.711 "data_offset": 0, 00:07:50.711 "data_size": 65536 00:07:50.711 }, 00:07:50.711 { 00:07:50.711 "name": "BaseBdev2", 00:07:50.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.711 "is_configured": false, 00:07:50.711 "data_offset": 0, 00:07:50.712 "data_size": 0 00:07:50.712 } 00:07:50.712 ] 00:07:50.712 }' 00:07:50.712 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.712 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.280 [2024-11-20 13:21:32.709566] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.280 [2024-11-20 13:21:32.709615] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.280 [2024-11-20 13:21:32.721557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.280 [2024-11-20 13:21:32.723399] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.280 [2024-11-20 13:21:32.723439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.280 "name": "Existed_Raid", 00:07:51.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.280 "strip_size_kb": 0, 00:07:51.280 "state": "configuring", 00:07:51.280 "raid_level": "raid1", 00:07:51.280 "superblock": false, 00:07:51.280 "num_base_bdevs": 2, 00:07:51.280 "num_base_bdevs_discovered": 1, 00:07:51.280 "num_base_bdevs_operational": 2, 00:07:51.280 "base_bdevs_list": [ 00:07:51.280 { 00:07:51.280 "name": "BaseBdev1", 00:07:51.280 "uuid": "ead79c47-4528-4e97-9d3b-96469fa60d85", 00:07:51.280 "is_configured": true, 00:07:51.280 "data_offset": 0, 00:07:51.280 "data_size": 65536 00:07:51.280 }, 00:07:51.280 { 00:07:51.280 "name": "BaseBdev2", 00:07:51.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.280 "is_configured": false, 00:07:51.280 "data_offset": 0, 00:07:51.280 "data_size": 0 00:07:51.280 } 00:07:51.280 
] 00:07:51.280 }' 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.280 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.539 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:51.539 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.539 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.539 [2024-11-20 13:21:33.171868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:51.539 [2024-11-20 13:21:33.172008] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:51.539 [2024-11-20 13:21:33.172036] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:51.539 [2024-11-20 13:21:33.172359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:51.539 [2024-11-20 13:21:33.172548] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:51.539 [2024-11-20 13:21:33.172597] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:51.539 [2024-11-20 13:21:33.172849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.539 BaseBdev2 00:07:51.539 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.539 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:51.539 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:51.539 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:51.539 13:21:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:51.539 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:51.539 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:51.539 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:51.539 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.539 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.539 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.539 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:51.539 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.539 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.539 [ 00:07:51.539 { 00:07:51.539 "name": "BaseBdev2", 00:07:51.539 "aliases": [ 00:07:51.539 "754da450-98da-4df5-a6f1-c2608e342add" 00:07:51.539 ], 00:07:51.539 "product_name": "Malloc disk", 00:07:51.539 "block_size": 512, 00:07:51.539 "num_blocks": 65536, 00:07:51.539 "uuid": "754da450-98da-4df5-a6f1-c2608e342add", 00:07:51.539 "assigned_rate_limits": { 00:07:51.539 "rw_ios_per_sec": 0, 00:07:51.539 "rw_mbytes_per_sec": 0, 00:07:51.539 "r_mbytes_per_sec": 0, 00:07:51.539 "w_mbytes_per_sec": 0 00:07:51.539 }, 00:07:51.539 "claimed": true, 00:07:51.539 "claim_type": "exclusive_write", 00:07:51.539 "zoned": false, 00:07:51.539 "supported_io_types": { 00:07:51.539 "read": true, 00:07:51.539 "write": true, 00:07:51.539 "unmap": true, 00:07:51.539 "flush": true, 00:07:51.798 "reset": true, 00:07:51.798 "nvme_admin": false, 00:07:51.798 "nvme_io": false, 00:07:51.798 "nvme_io_md": 
false, 00:07:51.798 "write_zeroes": true, 00:07:51.798 "zcopy": true, 00:07:51.798 "get_zone_info": false, 00:07:51.798 "zone_management": false, 00:07:51.798 "zone_append": false, 00:07:51.798 "compare": false, 00:07:51.798 "compare_and_write": false, 00:07:51.798 "abort": true, 00:07:51.798 "seek_hole": false, 00:07:51.798 "seek_data": false, 00:07:51.798 "copy": true, 00:07:51.798 "nvme_iov_md": false 00:07:51.798 }, 00:07:51.798 "memory_domains": [ 00:07:51.798 { 00:07:51.798 "dma_device_id": "system", 00:07:51.798 "dma_device_type": 1 00:07:51.798 }, 00:07:51.798 { 00:07:51.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.798 "dma_device_type": 2 00:07:51.798 } 00:07:51.798 ], 00:07:51.798 "driver_specific": {} 00:07:51.798 } 00:07:51.798 ] 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.798 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.798 "name": "Existed_Raid", 00:07:51.798 "uuid": "56f41d55-6c64-4430-9286-842e8768a948", 00:07:51.798 "strip_size_kb": 0, 00:07:51.798 "state": "online", 00:07:51.798 "raid_level": "raid1", 00:07:51.798 "superblock": false, 00:07:51.798 "num_base_bdevs": 2, 00:07:51.798 "num_base_bdevs_discovered": 2, 00:07:51.798 "num_base_bdevs_operational": 2, 00:07:51.798 "base_bdevs_list": [ 00:07:51.798 { 00:07:51.798 "name": "BaseBdev1", 00:07:51.798 "uuid": "ead79c47-4528-4e97-9d3b-96469fa60d85", 00:07:51.798 "is_configured": true, 00:07:51.798 "data_offset": 0, 00:07:51.798 "data_size": 65536 00:07:51.798 }, 00:07:51.799 { 00:07:51.799 "name": "BaseBdev2", 00:07:51.799 "uuid": "754da450-98da-4df5-a6f1-c2608e342add", 00:07:51.799 "is_configured": true, 00:07:51.799 "data_offset": 0, 00:07:51.799 "data_size": 65536 00:07:51.799 } 00:07:51.799 ] 00:07:51.799 }' 00:07:51.799 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:51.799 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.057 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:52.057 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:52.057 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:52.057 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:52.057 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:52.057 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:52.057 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:52.057 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.057 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.057 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:52.057 [2024-11-20 13:21:33.643401] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.057 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.057 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:52.057 "name": "Existed_Raid", 00:07:52.057 "aliases": [ 00:07:52.057 "56f41d55-6c64-4430-9286-842e8768a948" 00:07:52.057 ], 00:07:52.057 "product_name": "Raid Volume", 00:07:52.057 "block_size": 512, 00:07:52.057 "num_blocks": 65536, 00:07:52.057 "uuid": "56f41d55-6c64-4430-9286-842e8768a948", 00:07:52.057 "assigned_rate_limits": { 00:07:52.057 "rw_ios_per_sec": 0, 00:07:52.057 "rw_mbytes_per_sec": 0, 00:07:52.057 "r_mbytes_per_sec": 
0, 00:07:52.057 "w_mbytes_per_sec": 0 00:07:52.057 }, 00:07:52.057 "claimed": false, 00:07:52.057 "zoned": false, 00:07:52.057 "supported_io_types": { 00:07:52.057 "read": true, 00:07:52.057 "write": true, 00:07:52.057 "unmap": false, 00:07:52.057 "flush": false, 00:07:52.057 "reset": true, 00:07:52.057 "nvme_admin": false, 00:07:52.057 "nvme_io": false, 00:07:52.057 "nvme_io_md": false, 00:07:52.057 "write_zeroes": true, 00:07:52.057 "zcopy": false, 00:07:52.057 "get_zone_info": false, 00:07:52.057 "zone_management": false, 00:07:52.057 "zone_append": false, 00:07:52.057 "compare": false, 00:07:52.057 "compare_and_write": false, 00:07:52.057 "abort": false, 00:07:52.057 "seek_hole": false, 00:07:52.057 "seek_data": false, 00:07:52.057 "copy": false, 00:07:52.057 "nvme_iov_md": false 00:07:52.057 }, 00:07:52.057 "memory_domains": [ 00:07:52.057 { 00:07:52.057 "dma_device_id": "system", 00:07:52.057 "dma_device_type": 1 00:07:52.057 }, 00:07:52.057 { 00:07:52.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.057 "dma_device_type": 2 00:07:52.057 }, 00:07:52.057 { 00:07:52.057 "dma_device_id": "system", 00:07:52.057 "dma_device_type": 1 00:07:52.057 }, 00:07:52.057 { 00:07:52.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.057 "dma_device_type": 2 00:07:52.057 } 00:07:52.057 ], 00:07:52.057 "driver_specific": { 00:07:52.057 "raid": { 00:07:52.057 "uuid": "56f41d55-6c64-4430-9286-842e8768a948", 00:07:52.057 "strip_size_kb": 0, 00:07:52.057 "state": "online", 00:07:52.057 "raid_level": "raid1", 00:07:52.057 "superblock": false, 00:07:52.057 "num_base_bdevs": 2, 00:07:52.057 "num_base_bdevs_discovered": 2, 00:07:52.057 "num_base_bdevs_operational": 2, 00:07:52.057 "base_bdevs_list": [ 00:07:52.057 { 00:07:52.057 "name": "BaseBdev1", 00:07:52.057 "uuid": "ead79c47-4528-4e97-9d3b-96469fa60d85", 00:07:52.057 "is_configured": true, 00:07:52.057 "data_offset": 0, 00:07:52.057 "data_size": 65536 00:07:52.057 }, 00:07:52.057 { 00:07:52.057 "name": "BaseBdev2", 
00:07:52.057 "uuid": "754da450-98da-4df5-a6f1-c2608e342add", 00:07:52.057 "is_configured": true, 00:07:52.057 "data_offset": 0, 00:07:52.057 "data_size": 65536 00:07:52.057 } 00:07:52.057 ] 00:07:52.057 } 00:07:52.057 } 00:07:52.057 }' 00:07:52.058 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:52.317 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:52.317 BaseBdev2' 00:07:52.317 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.317 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:52.317 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.317 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.317 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:52.317 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.317 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.317 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.317 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.317 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.317 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.317 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:52.317 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.317 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.317 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.318 [2024-11-20 13:21:33.894724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.318 "name": "Existed_Raid", 00:07:52.318 "uuid": "56f41d55-6c64-4430-9286-842e8768a948", 00:07:52.318 "strip_size_kb": 0, 00:07:52.318 "state": "online", 00:07:52.318 "raid_level": "raid1", 00:07:52.318 "superblock": false, 00:07:52.318 "num_base_bdevs": 2, 00:07:52.318 "num_base_bdevs_discovered": 1, 00:07:52.318 "num_base_bdevs_operational": 1, 00:07:52.318 "base_bdevs_list": [ 00:07:52.318 
{ 00:07:52.318 "name": null, 00:07:52.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.318 "is_configured": false, 00:07:52.318 "data_offset": 0, 00:07:52.318 "data_size": 65536 00:07:52.318 }, 00:07:52.318 { 00:07:52.318 "name": "BaseBdev2", 00:07:52.318 "uuid": "754da450-98da-4df5-a6f1-c2608e342add", 00:07:52.318 "is_configured": true, 00:07:52.318 "data_offset": 0, 00:07:52.318 "data_size": 65536 00:07:52.318 } 00:07:52.318 ] 00:07:52.318 }' 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.318 13:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:52.920 [2024-11-20 13:21:34.401132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:52.920 [2024-11-20 13:21:34.401277] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.920 [2024-11-20 13:21:34.412765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.920 [2024-11-20 13:21:34.412884] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.920 [2024-11-20 13:21:34.412926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73681 00:07:52.920 13:21:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73681 ']' 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73681 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73681 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73681' 00:07:52.920 killing process with pid 73681 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73681 00:07:52.920 [2024-11-20 13:21:34.495683] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.920 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73681 00:07:52.920 [2024-11-20 13:21:34.496706] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.179 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:53.179 ************************************ 00:07:53.179 END TEST raid_state_function_test 00:07:53.179 ************************************ 00:07:53.179 00:07:53.179 real 0m3.933s 00:07:53.179 user 0m6.266s 00:07:53.179 sys 0m0.745s 00:07:53.179 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.179 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.179 13:21:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:53.179 13:21:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:53.179 13:21:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.179 13:21:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.179 ************************************ 00:07:53.179 START TEST raid_state_function_test_sb 00:07:53.179 ************************************ 00:07:53.179 13:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:53.179 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:53.179 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:53.179 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:53.179 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:53.179 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:53.179 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:53.179 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:53.179 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:53.180 Process raid pid: 73923 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73923 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73923' 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73923 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73923 ']' 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.180 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.180 13:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.438 [2024-11-20 13:21:34.863500] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:53.438 [2024-11-20 13:21:34.863722] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.438 [2024-11-20 13:21:35.019184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.438 [2024-11-20 13:21:35.044729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.438 [2024-11-20 13:21:35.087505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.438 [2024-11-20 13:21:35.087574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.375 [2024-11-20 13:21:35.697095] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:54.375 [2024-11-20 13:21:35.697151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:54.375 [2024-11-20 13:21:35.697161] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:54.375 [2024-11-20 13:21:35.697173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.375 "name": "Existed_Raid", 00:07:54.375 "uuid": "70624114-357b-40d9-b281-80e9ca9bcc9b", 00:07:54.375 "strip_size_kb": 0, 00:07:54.375 "state": "configuring", 00:07:54.375 "raid_level": "raid1", 00:07:54.375 "superblock": true, 00:07:54.375 "num_base_bdevs": 2, 00:07:54.375 "num_base_bdevs_discovered": 0, 00:07:54.375 "num_base_bdevs_operational": 2, 00:07:54.375 "base_bdevs_list": [ 00:07:54.375 { 00:07:54.375 "name": "BaseBdev1", 00:07:54.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.375 "is_configured": false, 00:07:54.375 "data_offset": 0, 00:07:54.375 "data_size": 0 00:07:54.375 }, 00:07:54.375 { 00:07:54.375 "name": "BaseBdev2", 00:07:54.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.375 "is_configured": false, 00:07:54.375 "data_offset": 0, 00:07:54.375 "data_size": 0 00:07:54.375 } 00:07:54.375 ] 00:07:54.375 }' 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.375 13:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.635 [2024-11-20 13:21:36.164185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:54.635 [2024-11-20 13:21:36.164230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.635 [2024-11-20 13:21:36.176162] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:54.635 [2024-11-20 13:21:36.176202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:54.635 [2024-11-20 13:21:36.176211] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:54.635 [2024-11-20 13:21:36.176229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.635 [2024-11-20 13:21:36.197168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.635 BaseBdev1 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.635 [ 00:07:54.635 { 00:07:54.635 "name": "BaseBdev1", 00:07:54.635 "aliases": [ 00:07:54.635 "3dec1690-da31-4386-bab8-c906a33b126a" 00:07:54.635 ], 00:07:54.635 "product_name": "Malloc disk", 00:07:54.635 "block_size": 512, 00:07:54.635 "num_blocks": 65536, 00:07:54.635 "uuid": "3dec1690-da31-4386-bab8-c906a33b126a", 00:07:54.635 "assigned_rate_limits": { 00:07:54.635 "rw_ios_per_sec": 0, 00:07:54.635 "rw_mbytes_per_sec": 0, 00:07:54.635 "r_mbytes_per_sec": 0, 00:07:54.635 "w_mbytes_per_sec": 0 00:07:54.635 }, 00:07:54.635 "claimed": true, 
00:07:54.635 "claim_type": "exclusive_write", 00:07:54.635 "zoned": false, 00:07:54.635 "supported_io_types": { 00:07:54.635 "read": true, 00:07:54.635 "write": true, 00:07:54.635 "unmap": true, 00:07:54.635 "flush": true, 00:07:54.635 "reset": true, 00:07:54.635 "nvme_admin": false, 00:07:54.635 "nvme_io": false, 00:07:54.635 "nvme_io_md": false, 00:07:54.635 "write_zeroes": true, 00:07:54.635 "zcopy": true, 00:07:54.635 "get_zone_info": false, 00:07:54.635 "zone_management": false, 00:07:54.635 "zone_append": false, 00:07:54.635 "compare": false, 00:07:54.635 "compare_and_write": false, 00:07:54.635 "abort": true, 00:07:54.635 "seek_hole": false, 00:07:54.635 "seek_data": false, 00:07:54.635 "copy": true, 00:07:54.635 "nvme_iov_md": false 00:07:54.635 }, 00:07:54.635 "memory_domains": [ 00:07:54.635 { 00:07:54.635 "dma_device_id": "system", 00:07:54.635 "dma_device_type": 1 00:07:54.635 }, 00:07:54.635 { 00:07:54.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.635 "dma_device_type": 2 00:07:54.635 } 00:07:54.635 ], 00:07:54.635 "driver_specific": {} 00:07:54.635 } 00:07:54.635 ] 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.635 "name": "Existed_Raid", 00:07:54.635 "uuid": "38c57aed-06f2-4cc8-99f4-2434dee7c5e5", 00:07:54.635 "strip_size_kb": 0, 00:07:54.635 "state": "configuring", 00:07:54.635 "raid_level": "raid1", 00:07:54.635 "superblock": true, 00:07:54.635 "num_base_bdevs": 2, 00:07:54.635 "num_base_bdevs_discovered": 1, 00:07:54.635 "num_base_bdevs_operational": 2, 00:07:54.635 "base_bdevs_list": [ 00:07:54.635 { 00:07:54.635 "name": "BaseBdev1", 00:07:54.635 "uuid": "3dec1690-da31-4386-bab8-c906a33b126a", 00:07:54.635 "is_configured": true, 00:07:54.635 "data_offset": 2048, 00:07:54.635 "data_size": 63488 00:07:54.635 }, 00:07:54.635 { 00:07:54.635 "name": "BaseBdev2", 00:07:54.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.635 "is_configured": false, 00:07:54.635 
"data_offset": 0, 00:07:54.635 "data_size": 0 00:07:54.635 } 00:07:54.635 ] 00:07:54.635 }' 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.635 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.204 [2024-11-20 13:21:36.668451] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:55.204 [2024-11-20 13:21:36.668557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.204 [2024-11-20 13:21:36.680465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.204 [2024-11-20 13:21:36.682478] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.204 [2024-11-20 13:21:36.682519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.204 "name": "Existed_Raid", 00:07:55.204 "uuid": "625f9e21-1f25-4b45-9d47-5c06ee3f2951", 00:07:55.204 "strip_size_kb": 0, 00:07:55.204 "state": "configuring", 00:07:55.204 "raid_level": "raid1", 00:07:55.204 "superblock": true, 00:07:55.204 "num_base_bdevs": 2, 00:07:55.204 "num_base_bdevs_discovered": 1, 00:07:55.204 "num_base_bdevs_operational": 2, 00:07:55.204 "base_bdevs_list": [ 00:07:55.204 { 00:07:55.204 "name": "BaseBdev1", 00:07:55.204 "uuid": "3dec1690-da31-4386-bab8-c906a33b126a", 00:07:55.204 "is_configured": true, 00:07:55.204 "data_offset": 2048, 00:07:55.204 "data_size": 63488 00:07:55.204 }, 00:07:55.204 { 00:07:55.204 "name": "BaseBdev2", 00:07:55.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.204 "is_configured": false, 00:07:55.204 "data_offset": 0, 00:07:55.204 "data_size": 0 00:07:55.204 } 00:07:55.204 ] 00:07:55.204 }' 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.204 13:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.463 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:55.463 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.463 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.463 [2024-11-20 13:21:37.094730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:55.463 [2024-11-20 13:21:37.095029] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:55.463 [2024-11-20 13:21:37.095083] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:55.463 BaseBdev2 00:07:55.463 [2024-11-20 13:21:37.095378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002390 00:07:55.463 [2024-11-20 13:21:37.095541] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:55.464 [2024-11-20 13:21:37.095600] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:55.464 [2024-11-20 13:21:37.095742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.464 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.464 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:55.464 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:55.464 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:55.464 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:55.464 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:55.464 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:55.464 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:55.464 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.464 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.464 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.464 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:55.464 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.464 13:21:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:55.464 [ 00:07:55.464 { 00:07:55.464 "name": "BaseBdev2", 00:07:55.464 "aliases": [ 00:07:55.464 "34204503-04a0-4937-9f9b-d9fc40b2fb8f" 00:07:55.464 ], 00:07:55.464 "product_name": "Malloc disk", 00:07:55.464 "block_size": 512, 00:07:55.464 "num_blocks": 65536, 00:07:55.464 "uuid": "34204503-04a0-4937-9f9b-d9fc40b2fb8f", 00:07:55.464 "assigned_rate_limits": { 00:07:55.464 "rw_ios_per_sec": 0, 00:07:55.464 "rw_mbytes_per_sec": 0, 00:07:55.464 "r_mbytes_per_sec": 0, 00:07:55.464 "w_mbytes_per_sec": 0 00:07:55.464 }, 00:07:55.464 "claimed": true, 00:07:55.464 "claim_type": "exclusive_write", 00:07:55.464 "zoned": false, 00:07:55.464 "supported_io_types": { 00:07:55.464 "read": true, 00:07:55.464 "write": true, 00:07:55.464 "unmap": true, 00:07:55.464 "flush": true, 00:07:55.464 "reset": true, 00:07:55.464 "nvme_admin": false, 00:07:55.464 "nvme_io": false, 00:07:55.464 "nvme_io_md": false, 00:07:55.464 "write_zeroes": true, 00:07:55.464 "zcopy": true, 00:07:55.464 "get_zone_info": false, 00:07:55.464 "zone_management": false, 00:07:55.464 "zone_append": false, 00:07:55.464 "compare": false, 00:07:55.464 "compare_and_write": false, 00:07:55.464 "abort": true, 00:07:55.464 "seek_hole": false, 00:07:55.464 "seek_data": false, 00:07:55.464 "copy": true, 00:07:55.464 "nvme_iov_md": false 00:07:55.464 }, 00:07:55.464 "memory_domains": [ 00:07:55.464 { 00:07:55.464 "dma_device_id": "system", 00:07:55.464 "dma_device_type": 1 00:07:55.464 }, 00:07:55.464 { 00:07:55.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.464 "dma_device_type": 2 00:07:55.464 } 00:07:55.464 ], 00:07:55.723 "driver_specific": {} 00:07:55.723 } 00:07:55.723 ] 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:55.723 "name": "Existed_Raid", 00:07:55.723 "uuid": "625f9e21-1f25-4b45-9d47-5c06ee3f2951", 00:07:55.723 "strip_size_kb": 0, 00:07:55.723 "state": "online", 00:07:55.723 "raid_level": "raid1", 00:07:55.723 "superblock": true, 00:07:55.723 "num_base_bdevs": 2, 00:07:55.723 "num_base_bdevs_discovered": 2, 00:07:55.723 "num_base_bdevs_operational": 2, 00:07:55.723 "base_bdevs_list": [ 00:07:55.723 { 00:07:55.723 "name": "BaseBdev1", 00:07:55.723 "uuid": "3dec1690-da31-4386-bab8-c906a33b126a", 00:07:55.723 "is_configured": true, 00:07:55.723 "data_offset": 2048, 00:07:55.723 "data_size": 63488 00:07:55.723 }, 00:07:55.723 { 00:07:55.723 "name": "BaseBdev2", 00:07:55.723 "uuid": "34204503-04a0-4937-9f9b-d9fc40b2fb8f", 00:07:55.723 "is_configured": true, 00:07:55.723 "data_offset": 2048, 00:07:55.723 "data_size": 63488 00:07:55.723 } 00:07:55.723 ] 00:07:55.723 }' 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.723 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.984 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:55.984 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:55.984 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:55.984 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:55.984 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:55.984 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:55.984 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:55.984 13:21:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.984 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:55.984 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.984 [2024-11-20 13:21:37.550285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.984 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.984 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:55.984 "name": "Existed_Raid", 00:07:55.984 "aliases": [ 00:07:55.984 "625f9e21-1f25-4b45-9d47-5c06ee3f2951" 00:07:55.984 ], 00:07:55.984 "product_name": "Raid Volume", 00:07:55.984 "block_size": 512, 00:07:55.984 "num_blocks": 63488, 00:07:55.984 "uuid": "625f9e21-1f25-4b45-9d47-5c06ee3f2951", 00:07:55.984 "assigned_rate_limits": { 00:07:55.984 "rw_ios_per_sec": 0, 00:07:55.984 "rw_mbytes_per_sec": 0, 00:07:55.984 "r_mbytes_per_sec": 0, 00:07:55.984 "w_mbytes_per_sec": 0 00:07:55.984 }, 00:07:55.984 "claimed": false, 00:07:55.984 "zoned": false, 00:07:55.984 "supported_io_types": { 00:07:55.984 "read": true, 00:07:55.984 "write": true, 00:07:55.984 "unmap": false, 00:07:55.984 "flush": false, 00:07:55.984 "reset": true, 00:07:55.984 "nvme_admin": false, 00:07:55.984 "nvme_io": false, 00:07:55.984 "nvme_io_md": false, 00:07:55.984 "write_zeroes": true, 00:07:55.984 "zcopy": false, 00:07:55.984 "get_zone_info": false, 00:07:55.984 "zone_management": false, 00:07:55.984 "zone_append": false, 00:07:55.984 "compare": false, 00:07:55.984 "compare_and_write": false, 00:07:55.984 "abort": false, 00:07:55.984 "seek_hole": false, 00:07:55.984 "seek_data": false, 00:07:55.984 "copy": false, 00:07:55.984 "nvme_iov_md": false 00:07:55.984 }, 00:07:55.984 "memory_domains": [ 00:07:55.984 { 00:07:55.984 "dma_device_id": "system", 00:07:55.984 
"dma_device_type": 1 00:07:55.984 }, 00:07:55.984 { 00:07:55.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.984 "dma_device_type": 2 00:07:55.984 }, 00:07:55.984 { 00:07:55.984 "dma_device_id": "system", 00:07:55.984 "dma_device_type": 1 00:07:55.984 }, 00:07:55.984 { 00:07:55.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.984 "dma_device_type": 2 00:07:55.984 } 00:07:55.984 ], 00:07:55.984 "driver_specific": { 00:07:55.984 "raid": { 00:07:55.984 "uuid": "625f9e21-1f25-4b45-9d47-5c06ee3f2951", 00:07:55.984 "strip_size_kb": 0, 00:07:55.984 "state": "online", 00:07:55.984 "raid_level": "raid1", 00:07:55.984 "superblock": true, 00:07:55.984 "num_base_bdevs": 2, 00:07:55.984 "num_base_bdevs_discovered": 2, 00:07:55.984 "num_base_bdevs_operational": 2, 00:07:55.984 "base_bdevs_list": [ 00:07:55.984 { 00:07:55.984 "name": "BaseBdev1", 00:07:55.984 "uuid": "3dec1690-da31-4386-bab8-c906a33b126a", 00:07:55.984 "is_configured": true, 00:07:55.984 "data_offset": 2048, 00:07:55.984 "data_size": 63488 00:07:55.984 }, 00:07:55.984 { 00:07:55.984 "name": "BaseBdev2", 00:07:55.984 "uuid": "34204503-04a0-4937-9f9b-d9fc40b2fb8f", 00:07:55.984 "is_configured": true, 00:07:55.984 "data_offset": 2048, 00:07:55.984 "data_size": 63488 00:07:55.984 } 00:07:55.984 ] 00:07:55.984 } 00:07:55.984 } 00:07:55.984 }' 00:07:55.984 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.984 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:55.984 BaseBdev2' 00:07:55.984 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:56.245 13:21:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.245 [2024-11-20 13:21:37.793659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.245 "name": "Existed_Raid", 00:07:56.245 "uuid": "625f9e21-1f25-4b45-9d47-5c06ee3f2951", 00:07:56.245 "strip_size_kb": 0, 00:07:56.245 "state": "online", 00:07:56.245 "raid_level": "raid1", 00:07:56.245 "superblock": true, 00:07:56.245 "num_base_bdevs": 2, 00:07:56.245 "num_base_bdevs_discovered": 1, 00:07:56.245 "num_base_bdevs_operational": 1, 00:07:56.245 "base_bdevs_list": [ 00:07:56.245 { 00:07:56.245 "name": null, 00:07:56.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.245 "is_configured": false, 00:07:56.245 "data_offset": 0, 00:07:56.245 "data_size": 63488 00:07:56.245 }, 00:07:56.245 { 00:07:56.245 "name": "BaseBdev2", 00:07:56.245 "uuid": "34204503-04a0-4937-9f9b-d9fc40b2fb8f", 00:07:56.245 "is_configured": true, 00:07:56.245 "data_offset": 2048, 00:07:56.245 "data_size": 63488 00:07:56.245 } 00:07:56.245 ] 00:07:56.245 }' 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.245 13:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.813 [2024-11-20 13:21:38.272340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:56.813 [2024-11-20 13:21:38.272493] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.813 [2024-11-20 13:21:38.284371] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.813 [2024-11-20 13:21:38.284504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.813 [2024-11-20 13:21:38.284547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73923 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73923 ']' 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73923 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73923 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.813 killing process with pid 73923 
00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73923' 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73923 00:07:56.813 [2024-11-20 13:21:38.360670] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.813 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73923 00:07:56.813 [2024-11-20 13:21:38.361671] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.072 13:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:57.072 00:07:57.072 real 0m3.790s 00:07:57.072 user 0m6.022s 00:07:57.072 sys 0m0.734s 00:07:57.072 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.072 13:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.072 ************************************ 00:07:57.072 END TEST raid_state_function_test_sb 00:07:57.072 ************************************ 00:07:57.072 13:21:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:57.072 13:21:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:57.072 13:21:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.072 13:21:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.072 ************************************ 00:07:57.072 START TEST raid_superblock_test 00:07:57.072 ************************************ 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74159 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74159 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74159 ']' 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.072 13:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.072 [2024-11-20 13:21:38.716801] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:57.072 [2024-11-20 13:21:38.717032] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74159 ] 00:07:57.331 [2024-11-20 13:21:38.872964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.331 [2024-11-20 13:21:38.899466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.331 [2024-11-20 13:21:38.942115] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.331 [2024-11-20 13:21:38.942213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.899 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.899 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:57.899 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:57.899 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:57.899 13:21:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:57.899 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:57.899 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:57.899 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:57.899 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:57.899 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:57.899 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:57.899 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.899 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.160 malloc1 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.160 [2024-11-20 13:21:39.576350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:58.160 [2024-11-20 13:21:39.576487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.160 [2024-11-20 13:21:39.576514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:58.160 [2024-11-20 13:21:39.576530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.160 
[2024-11-20 13:21:39.578874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.160 [2024-11-20 13:21:39.578913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:58.160 pt1 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.160 malloc2 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.160 13:21:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.160 [2024-11-20 13:21:39.605220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:58.160 [2024-11-20 13:21:39.605318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.160 [2024-11-20 13:21:39.605369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:58.160 [2024-11-20 13:21:39.605399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.160 [2024-11-20 13:21:39.607503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.160 [2024-11-20 13:21:39.607573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:58.160 pt2 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.160 [2024-11-20 13:21:39.617292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:58.160 [2024-11-20 13:21:39.619281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:58.160 [2024-11-20 13:21:39.619507] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:58.160 [2024-11-20 13:21:39.619555] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:58.160 [2024-11-20 
13:21:39.619893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:58.160 [2024-11-20 13:21:39.620102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:58.160 [2024-11-20 13:21:39.620147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:58.160 [2024-11-20 13:21:39.620340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.160 13:21:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.160 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.160 "name": "raid_bdev1", 00:07:58.160 "uuid": "0722effe-76e8-4393-9271-b9ea6022bd11", 00:07:58.160 "strip_size_kb": 0, 00:07:58.160 "state": "online", 00:07:58.160 "raid_level": "raid1", 00:07:58.160 "superblock": true, 00:07:58.160 "num_base_bdevs": 2, 00:07:58.160 "num_base_bdevs_discovered": 2, 00:07:58.160 "num_base_bdevs_operational": 2, 00:07:58.160 "base_bdevs_list": [ 00:07:58.160 { 00:07:58.160 "name": "pt1", 00:07:58.160 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.160 "is_configured": true, 00:07:58.160 "data_offset": 2048, 00:07:58.160 "data_size": 63488 00:07:58.160 }, 00:07:58.160 { 00:07:58.160 "name": "pt2", 00:07:58.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.160 "is_configured": true, 00:07:58.161 "data_offset": 2048, 00:07:58.161 "data_size": 63488 00:07:58.161 } 00:07:58.161 ] 00:07:58.161 }' 00:07:58.161 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.161 13:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.420 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:58.420 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:58.420 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.420 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.420 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:58.420 
13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.420 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.420 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.420 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.420 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.420 [2024-11-20 13:21:40.056838] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.420 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:58.687 "name": "raid_bdev1", 00:07:58.687 "aliases": [ 00:07:58.687 "0722effe-76e8-4393-9271-b9ea6022bd11" 00:07:58.687 ], 00:07:58.687 "product_name": "Raid Volume", 00:07:58.687 "block_size": 512, 00:07:58.687 "num_blocks": 63488, 00:07:58.687 "uuid": "0722effe-76e8-4393-9271-b9ea6022bd11", 00:07:58.687 "assigned_rate_limits": { 00:07:58.687 "rw_ios_per_sec": 0, 00:07:58.687 "rw_mbytes_per_sec": 0, 00:07:58.687 "r_mbytes_per_sec": 0, 00:07:58.687 "w_mbytes_per_sec": 0 00:07:58.687 }, 00:07:58.687 "claimed": false, 00:07:58.687 "zoned": false, 00:07:58.687 "supported_io_types": { 00:07:58.687 "read": true, 00:07:58.687 "write": true, 00:07:58.687 "unmap": false, 00:07:58.687 "flush": false, 00:07:58.687 "reset": true, 00:07:58.687 "nvme_admin": false, 00:07:58.687 "nvme_io": false, 00:07:58.687 "nvme_io_md": false, 00:07:58.687 "write_zeroes": true, 00:07:58.687 "zcopy": false, 00:07:58.687 "get_zone_info": false, 00:07:58.687 "zone_management": false, 00:07:58.687 "zone_append": false, 00:07:58.687 "compare": false, 00:07:58.687 "compare_and_write": false, 00:07:58.687 "abort": false, 00:07:58.687 "seek_hole": false, 
00:07:58.687 "seek_data": false, 00:07:58.687 "copy": false, 00:07:58.687 "nvme_iov_md": false 00:07:58.687 }, 00:07:58.687 "memory_domains": [ 00:07:58.687 { 00:07:58.687 "dma_device_id": "system", 00:07:58.687 "dma_device_type": 1 00:07:58.687 }, 00:07:58.687 { 00:07:58.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.687 "dma_device_type": 2 00:07:58.687 }, 00:07:58.687 { 00:07:58.687 "dma_device_id": "system", 00:07:58.687 "dma_device_type": 1 00:07:58.687 }, 00:07:58.687 { 00:07:58.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.687 "dma_device_type": 2 00:07:58.687 } 00:07:58.687 ], 00:07:58.687 "driver_specific": { 00:07:58.687 "raid": { 00:07:58.687 "uuid": "0722effe-76e8-4393-9271-b9ea6022bd11", 00:07:58.687 "strip_size_kb": 0, 00:07:58.687 "state": "online", 00:07:58.687 "raid_level": "raid1", 00:07:58.687 "superblock": true, 00:07:58.687 "num_base_bdevs": 2, 00:07:58.687 "num_base_bdevs_discovered": 2, 00:07:58.687 "num_base_bdevs_operational": 2, 00:07:58.687 "base_bdevs_list": [ 00:07:58.687 { 00:07:58.687 "name": "pt1", 00:07:58.687 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.687 "is_configured": true, 00:07:58.687 "data_offset": 2048, 00:07:58.687 "data_size": 63488 00:07:58.687 }, 00:07:58.687 { 00:07:58.687 "name": "pt2", 00:07:58.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.687 "is_configured": true, 00:07:58.687 "data_offset": 2048, 00:07:58.687 "data_size": 63488 00:07:58.687 } 00:07:58.687 ] 00:07:58.687 } 00:07:58.687 } 00:07:58.687 }' 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:58.687 pt2' 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.687 13:21:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.687 [2024-11-20 13:21:40.288361] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0722effe-76e8-4393-9271-b9ea6022bd11 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0722effe-76e8-4393-9271-b9ea6022bd11 ']' 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.687 [2024-11-20 13:21:40.332031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:58.687 [2024-11-20 13:21:40.332056] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:58.687 [2024-11-20 13:21:40.332140] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.687 [2024-11-20 13:21:40.332206] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.687 [2024-11-20 13:21:40.332215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:58.687 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.958 [2024-11-20 13:21:40.471786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:58.958 [2024-11-20 13:21:40.473818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:58.958 [2024-11-20 13:21:40.473883] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:07:58.958 [2024-11-20 13:21:40.473928] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:58.958 [2024-11-20 13:21:40.473944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:58.958 [2024-11-20 13:21:40.473954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:58.958 request: 00:07:58.958 { 00:07:58.958 "name": "raid_bdev1", 00:07:58.958 "raid_level": "raid1", 00:07:58.958 "base_bdevs": [ 00:07:58.958 "malloc1", 00:07:58.958 "malloc2" 00:07:58.958 ], 00:07:58.958 "superblock": false, 00:07:58.958 "method": "bdev_raid_create", 00:07:58.958 "req_id": 1 00:07:58.958 } 00:07:58.958 Got JSON-RPC error response 00:07:58.958 response: 00:07:58.958 { 00:07:58.958 "code": -17, 00:07:58.958 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:58.958 } 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:58.958 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.959 [2024-11-20 13:21:40.527661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:58.959 [2024-11-20 13:21:40.527717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.959 [2024-11-20 13:21:40.527736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:58.959 [2024-11-20 13:21:40.527745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.959 [2024-11-20 13:21:40.529902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.959 [2024-11-20 13:21:40.529939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:58.959 [2024-11-20 13:21:40.530019] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:58.959 [2024-11-20 13:21:40.530078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:58.959 pt1 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.959 13:21:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.959 "name": "raid_bdev1", 00:07:58.959 "uuid": "0722effe-76e8-4393-9271-b9ea6022bd11", 00:07:58.959 "strip_size_kb": 0, 00:07:58.959 "state": "configuring", 00:07:58.959 "raid_level": "raid1", 00:07:58.959 "superblock": true, 00:07:58.959 "num_base_bdevs": 2, 00:07:58.959 "num_base_bdevs_discovered": 1, 00:07:58.959 "num_base_bdevs_operational": 2, 00:07:58.959 "base_bdevs_list": [ 00:07:58.959 { 00:07:58.959 "name": "pt1", 00:07:58.959 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.959 
"is_configured": true, 00:07:58.959 "data_offset": 2048, 00:07:58.959 "data_size": 63488 00:07:58.959 }, 00:07:58.959 { 00:07:58.959 "name": null, 00:07:58.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.959 "is_configured": false, 00:07:58.959 "data_offset": 2048, 00:07:58.959 "data_size": 63488 00:07:58.959 } 00:07:58.959 ] 00:07:58.959 }' 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.959 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.527 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:59.527 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:59.527 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:59.527 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:59.527 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.527 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.527 [2024-11-20 13:21:40.978974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:59.527 [2024-11-20 13:21:40.979105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.527 [2024-11-20 13:21:40.979151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:59.527 [2024-11-20 13:21:40.979183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.527 [2024-11-20 13:21:40.979694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.527 [2024-11-20 13:21:40.979760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:59.527 [2024-11-20 13:21:40.979882] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:59.527 [2024-11-20 13:21:40.979949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:59.527 [2024-11-20 13:21:40.980103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:59.527 [2024-11-20 13:21:40.980146] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:59.527 [2024-11-20 13:21:40.980437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:59.527 [2024-11-20 13:21:40.980607] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:59.527 [2024-11-20 13:21:40.980659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:59.527 [2024-11-20 13:21:40.980820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.527 pt2 00:07:59.528 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.528 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:59.528 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:59.528 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:59.528 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.528 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.528 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.528 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.528 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.528 
13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.528 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.528 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.528 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.528 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.528 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.528 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.528 13:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.528 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.528 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.528 "name": "raid_bdev1", 00:07:59.528 "uuid": "0722effe-76e8-4393-9271-b9ea6022bd11", 00:07:59.528 "strip_size_kb": 0, 00:07:59.528 "state": "online", 00:07:59.528 "raid_level": "raid1", 00:07:59.528 "superblock": true, 00:07:59.528 "num_base_bdevs": 2, 00:07:59.528 "num_base_bdevs_discovered": 2, 00:07:59.528 "num_base_bdevs_operational": 2, 00:07:59.528 "base_bdevs_list": [ 00:07:59.528 { 00:07:59.528 "name": "pt1", 00:07:59.528 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.528 "is_configured": true, 00:07:59.528 "data_offset": 2048, 00:07:59.528 "data_size": 63488 00:07:59.528 }, 00:07:59.528 { 00:07:59.528 "name": "pt2", 00:07:59.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.528 "is_configured": true, 00:07:59.528 "data_offset": 2048, 00:07:59.528 "data_size": 63488 00:07:59.528 } 00:07:59.528 ] 00:07:59.528 }' 00:07:59.528 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:59.528 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.787 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:59.787 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:59.787 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:59.787 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:59.787 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:59.787 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:59.787 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:59.787 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.787 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.787 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:59.787 [2024-11-20 13:21:41.418462] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.787 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.046 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:00.046 "name": "raid_bdev1", 00:08:00.046 "aliases": [ 00:08:00.046 "0722effe-76e8-4393-9271-b9ea6022bd11" 00:08:00.046 ], 00:08:00.046 "product_name": "Raid Volume", 00:08:00.046 "block_size": 512, 00:08:00.046 "num_blocks": 63488, 00:08:00.046 "uuid": "0722effe-76e8-4393-9271-b9ea6022bd11", 00:08:00.046 "assigned_rate_limits": { 00:08:00.046 "rw_ios_per_sec": 0, 00:08:00.046 "rw_mbytes_per_sec": 0, 00:08:00.046 "r_mbytes_per_sec": 0, 00:08:00.046 "w_mbytes_per_sec": 0 
00:08:00.046 }, 00:08:00.046 "claimed": false, 00:08:00.046 "zoned": false, 00:08:00.046 "supported_io_types": { 00:08:00.046 "read": true, 00:08:00.046 "write": true, 00:08:00.046 "unmap": false, 00:08:00.046 "flush": false, 00:08:00.046 "reset": true, 00:08:00.046 "nvme_admin": false, 00:08:00.046 "nvme_io": false, 00:08:00.046 "nvme_io_md": false, 00:08:00.046 "write_zeroes": true, 00:08:00.046 "zcopy": false, 00:08:00.046 "get_zone_info": false, 00:08:00.046 "zone_management": false, 00:08:00.046 "zone_append": false, 00:08:00.046 "compare": false, 00:08:00.046 "compare_and_write": false, 00:08:00.046 "abort": false, 00:08:00.046 "seek_hole": false, 00:08:00.046 "seek_data": false, 00:08:00.046 "copy": false, 00:08:00.046 "nvme_iov_md": false 00:08:00.046 }, 00:08:00.046 "memory_domains": [ 00:08:00.046 { 00:08:00.046 "dma_device_id": "system", 00:08:00.046 "dma_device_type": 1 00:08:00.046 }, 00:08:00.046 { 00:08:00.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.046 "dma_device_type": 2 00:08:00.046 }, 00:08:00.046 { 00:08:00.046 "dma_device_id": "system", 00:08:00.046 "dma_device_type": 1 00:08:00.046 }, 00:08:00.046 { 00:08:00.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.046 "dma_device_type": 2 00:08:00.046 } 00:08:00.046 ], 00:08:00.046 "driver_specific": { 00:08:00.046 "raid": { 00:08:00.046 "uuid": "0722effe-76e8-4393-9271-b9ea6022bd11", 00:08:00.046 "strip_size_kb": 0, 00:08:00.046 "state": "online", 00:08:00.046 "raid_level": "raid1", 00:08:00.046 "superblock": true, 00:08:00.046 "num_base_bdevs": 2, 00:08:00.046 "num_base_bdevs_discovered": 2, 00:08:00.046 "num_base_bdevs_operational": 2, 00:08:00.046 "base_bdevs_list": [ 00:08:00.046 { 00:08:00.046 "name": "pt1", 00:08:00.046 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:00.046 "is_configured": true, 00:08:00.046 "data_offset": 2048, 00:08:00.046 "data_size": 63488 00:08:00.046 }, 00:08:00.046 { 00:08:00.046 "name": "pt2", 00:08:00.046 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:00.046 "is_configured": true, 00:08:00.046 "data_offset": 2048, 00:08:00.046 "data_size": 63488 00:08:00.046 } 00:08:00.046 ] 00:08:00.046 } 00:08:00.046 } 00:08:00.046 }' 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:00.047 pt2' 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.047 [2024-11-20 13:21:41.662041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0722effe-76e8-4393-9271-b9ea6022bd11 '!=' 0722effe-76e8-4393-9271-b9ea6022bd11 ']' 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.047 13:21:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:00.047 [2024-11-20 13:21:41.709729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:00.306 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.306 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:00.306 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.306 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.306 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.306 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.307 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:00.307 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.307 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.307 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.307 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.307 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.307 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.307 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.307 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.307 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.307 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:00.307 "name": "raid_bdev1", 00:08:00.307 "uuid": "0722effe-76e8-4393-9271-b9ea6022bd11", 00:08:00.307 "strip_size_kb": 0, 00:08:00.307 "state": "online", 00:08:00.307 "raid_level": "raid1", 00:08:00.307 "superblock": true, 00:08:00.307 "num_base_bdevs": 2, 00:08:00.307 "num_base_bdevs_discovered": 1, 00:08:00.307 "num_base_bdevs_operational": 1, 00:08:00.307 "base_bdevs_list": [ 00:08:00.307 { 00:08:00.307 "name": null, 00:08:00.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.307 "is_configured": false, 00:08:00.307 "data_offset": 0, 00:08:00.307 "data_size": 63488 00:08:00.307 }, 00:08:00.307 { 00:08:00.307 "name": "pt2", 00:08:00.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:00.307 "is_configured": true, 00:08:00.307 "data_offset": 2048, 00:08:00.307 "data_size": 63488 00:08:00.307 } 00:08:00.307 ] 00:08:00.307 }' 00:08:00.307 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.307 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.567 [2024-11-20 13:21:42.117033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:00.567 [2024-11-20 13:21:42.117115] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:00.567 [2024-11-20 13:21:42.117234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.567 [2024-11-20 13:21:42.117326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.567 [2024-11-20 13:21:42.117377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.567 [2024-11-20 13:21:42.188855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:00.567 [2024-11-20 13:21:42.188917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.567 [2024-11-20 13:21:42.188938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:00.567 [2024-11-20 13:21:42.188947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.567 [2024-11-20 13:21:42.191152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.567 [2024-11-20 13:21:42.191229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:00.567 [2024-11-20 13:21:42.191314] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:00.567 [2024-11-20 13:21:42.191346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:00.567 [2024-11-20 13:21:42.191425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:00.567 [2024-11-20 13:21:42.191433] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:00.567 [2024-11-20 13:21:42.191682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:00.567 [2024-11-20 13:21:42.191795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:00.567 [2024-11-20 13:21:42.191806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000001c80 00:08:00.567 [2024-11-20 13:21:42.191910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.567 pt2 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.567 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.827 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:00.827 "name": "raid_bdev1", 00:08:00.827 "uuid": "0722effe-76e8-4393-9271-b9ea6022bd11", 00:08:00.827 "strip_size_kb": 0, 00:08:00.827 "state": "online", 00:08:00.827 "raid_level": "raid1", 00:08:00.827 "superblock": true, 00:08:00.827 "num_base_bdevs": 2, 00:08:00.827 "num_base_bdevs_discovered": 1, 00:08:00.827 "num_base_bdevs_operational": 1, 00:08:00.827 "base_bdevs_list": [ 00:08:00.827 { 00:08:00.827 "name": null, 00:08:00.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.827 "is_configured": false, 00:08:00.827 "data_offset": 2048, 00:08:00.827 "data_size": 63488 00:08:00.827 }, 00:08:00.827 { 00:08:00.827 "name": "pt2", 00:08:00.827 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:00.827 "is_configured": true, 00:08:00.827 "data_offset": 2048, 00:08:00.827 "data_size": 63488 00:08:00.827 } 00:08:00.827 ] 00:08:00.827 }' 00:08:00.827 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.827 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.087 [2024-11-20 13:21:42.640125] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:01.087 [2024-11-20 13:21:42.640213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:01.087 [2024-11-20 13:21:42.640316] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.087 [2024-11-20 13:21:42.640386] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:01.087 [2024-11-20 13:21:42.640437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.087 [2024-11-20 13:21:42.703986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:01.087 [2024-11-20 13:21:42.704109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.087 [2024-11-20 13:21:42.704143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:08:01.087 [2024-11-20 13:21:42.704175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.087 [2024-11-20 13:21:42.706486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.087 [2024-11-20 13:21:42.706559] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:01.087 [2024-11-20 13:21:42.706661] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:01.087 [2024-11-20 13:21:42.706730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:01.087 [2024-11-20 13:21:42.706887] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:01.087 [2024-11-20 13:21:42.706948] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:01.087 [2024-11-20 13:21:42.706988] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:08:01.087 [2024-11-20 13:21:42.707064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:01.087 [2024-11-20 13:21:42.707180] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:08:01.087 [2024-11-20 13:21:42.707220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:01.087 [2024-11-20 13:21:42.707466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:01.087 [2024-11-20 13:21:42.707630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:08:01.087 [2024-11-20 13:21:42.707674] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:08:01.087 [2024-11-20 13:21:42.707826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.087 pt1 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.087 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.088 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.088 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.088 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.088 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.088 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.088 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.088 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.348 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.348 "name": "raid_bdev1", 00:08:01.348 "uuid": "0722effe-76e8-4393-9271-b9ea6022bd11", 00:08:01.348 "strip_size_kb": 0, 00:08:01.348 "state": "online", 00:08:01.348 "raid_level": "raid1", 00:08:01.348 "superblock": true, 00:08:01.348 "num_base_bdevs": 2, 00:08:01.348 "num_base_bdevs_discovered": 1, 00:08:01.348 "num_base_bdevs_operational": 
1, 00:08:01.348 "base_bdevs_list": [ 00:08:01.348 { 00:08:01.348 "name": null, 00:08:01.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.348 "is_configured": false, 00:08:01.348 "data_offset": 2048, 00:08:01.348 "data_size": 63488 00:08:01.348 }, 00:08:01.348 { 00:08:01.348 "name": "pt2", 00:08:01.348 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:01.348 "is_configured": true, 00:08:01.348 "data_offset": 2048, 00:08:01.348 "data_size": 63488 00:08:01.348 } 00:08:01.348 ] 00:08:01.348 }' 00:08:01.348 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.348 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:01.609 [2024-11-20 13:21:43.179444] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0722effe-76e8-4393-9271-b9ea6022bd11 '!=' 0722effe-76e8-4393-9271-b9ea6022bd11 ']' 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74159 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74159 ']' 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74159 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74159 00:08:01.609 killing process with pid 74159 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74159' 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74159 00:08:01.609 [2024-11-20 13:21:43.262780] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:01.609 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74159 00:08:01.609 [2024-11-20 13:21:43.262878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.609 [2024-11-20 13:21:43.262934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:01.609 [2024-11-20 13:21:43.262944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state 
offline 00:08:01.869 [2024-11-20 13:21:43.286349] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.869 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:01.869 00:08:01.869 real 0m4.867s 00:08:01.869 user 0m8.023s 00:08:01.869 sys 0m0.965s 00:08:01.869 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.869 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.869 ************************************ 00:08:01.869 END TEST raid_superblock_test 00:08:01.869 ************************************ 00:08:02.129 13:21:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:02.129 13:21:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:02.129 13:21:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.129 13:21:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:02.129 ************************************ 00:08:02.129 START TEST raid_read_error_test 00:08:02.129 ************************************ 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tkQ5ACbW1t 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74478 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74478 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:02.129 
13:21:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74478 ']' 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.129 13:21:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.129 [2024-11-20 13:21:43.666936] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:08:02.129 [2024-11-20 13:21:43.667171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74478 ] 00:08:02.388 [2024-11-20 13:21:43.821394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.388 [2024-11-20 13:21:43.846938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.388 [2024-11-20 13:21:43.889496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.388 [2024-11-20 13:21:43.889610] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.957 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.957 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:02.957 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:02.957 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:02.957 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.957 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.957 BaseBdev1_malloc 00:08:02.957 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.957 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:02.957 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.957 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.957 true 00:08:02.957 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.957 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:02.957 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.957 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.957 [2024-11-20 13:21:44.535856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:02.957 [2024-11-20 13:21:44.535999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.958 [2024-11-20 13:21:44.536044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:02.958 [2024-11-20 13:21:44.536078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.958 [2024-11-20 13:21:44.538275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.958 [2024-11-20 13:21:44.538343] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:08:02.958 BaseBdev1 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.958 BaseBdev2_malloc 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.958 true 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.958 [2024-11-20 13:21:44.576568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:02.958 [2024-11-20 13:21:44.576662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.958 [2024-11-20 13:21:44.576715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:02.958 [2024-11-20 13:21:44.576754] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.958 [2024-11-20 13:21:44.578869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.958 [2024-11-20 13:21:44.578942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:02.958 BaseBdev2 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.958 [2024-11-20 13:21:44.588601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.958 [2024-11-20 13:21:44.590486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.958 [2024-11-20 13:21:44.590682] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:02.958 [2024-11-20 13:21:44.590695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:02.958 [2024-11-20 13:21:44.590930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:02.958 [2024-11-20 13:21:44.591069] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:02.958 [2024-11-20 13:21:44.591083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:02.958 [2024-11-20 13:21:44.591232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.958 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.218 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.218 "name": "raid_bdev1", 00:08:03.218 "uuid": "0cbbcddd-91f9-4ed1-8d96-13df038654a8", 00:08:03.218 "strip_size_kb": 0, 00:08:03.218 "state": "online", 00:08:03.218 "raid_level": "raid1", 00:08:03.218 "superblock": true, 00:08:03.218 "num_base_bdevs": 2, 00:08:03.218 
"num_base_bdevs_discovered": 2, 00:08:03.218 "num_base_bdevs_operational": 2, 00:08:03.218 "base_bdevs_list": [ 00:08:03.218 { 00:08:03.218 "name": "BaseBdev1", 00:08:03.218 "uuid": "cab5ecf2-1415-5a7c-9b3f-2fb3161e4b15", 00:08:03.218 "is_configured": true, 00:08:03.218 "data_offset": 2048, 00:08:03.218 "data_size": 63488 00:08:03.218 }, 00:08:03.218 { 00:08:03.218 "name": "BaseBdev2", 00:08:03.218 "uuid": "98d1c6c5-dd77-5717-bca8-59551482d757", 00:08:03.218 "is_configured": true, 00:08:03.218 "data_offset": 2048, 00:08:03.218 "data_size": 63488 00:08:03.218 } 00:08:03.218 ] 00:08:03.218 }' 00:08:03.218 13:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.218 13:21:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.477 13:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:03.477 13:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:03.736 [2024-11-20 13:21:45.148087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:08:04.673 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:04.673 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.673 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.673 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.673 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:04.673 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:04.673 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:04.673 13:21:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:04.673 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:04.673 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.673 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.673 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.673 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.673 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.673 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.673 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.674 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.674 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.674 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.674 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.674 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.674 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.674 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.674 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.674 "name": "raid_bdev1", 00:08:04.674 "uuid": "0cbbcddd-91f9-4ed1-8d96-13df038654a8", 00:08:04.674 "strip_size_kb": 0, 00:08:04.674 "state": "online", 
00:08:04.674 "raid_level": "raid1", 00:08:04.674 "superblock": true, 00:08:04.674 "num_base_bdevs": 2, 00:08:04.674 "num_base_bdevs_discovered": 2, 00:08:04.674 "num_base_bdevs_operational": 2, 00:08:04.674 "base_bdevs_list": [ 00:08:04.674 { 00:08:04.674 "name": "BaseBdev1", 00:08:04.674 "uuid": "cab5ecf2-1415-5a7c-9b3f-2fb3161e4b15", 00:08:04.674 "is_configured": true, 00:08:04.674 "data_offset": 2048, 00:08:04.674 "data_size": 63488 00:08:04.674 }, 00:08:04.674 { 00:08:04.674 "name": "BaseBdev2", 00:08:04.674 "uuid": "98d1c6c5-dd77-5717-bca8-59551482d757", 00:08:04.674 "is_configured": true, 00:08:04.674 "data_offset": 2048, 00:08:04.674 "data_size": 63488 00:08:04.674 } 00:08:04.674 ] 00:08:04.674 }' 00:08:04.674 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.674 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.933 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:04.933 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.933 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.933 [2024-11-20 13:21:46.560087] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.933 [2024-11-20 13:21:46.560172] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.933 [2024-11-20 13:21:46.562839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.933 [2024-11-20 13:21:46.562933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.933 [2024-11-20 13:21:46.563051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.933 [2024-11-20 13:21:46.563097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name 
raid_bdev1, state offline 00:08:04.933 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.933 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74478 00:08:04.933 { 00:08:04.933 "results": [ 00:08:04.933 { 00:08:04.933 "job": "raid_bdev1", 00:08:04.933 "core_mask": "0x1", 00:08:04.933 "workload": "randrw", 00:08:04.933 "percentage": 50, 00:08:04.933 "status": "finished", 00:08:04.933 "queue_depth": 1, 00:08:04.933 "io_size": 131072, 00:08:04.933 "runtime": 1.412897, 00:08:04.933 "iops": 19159.216843124446, 00:08:04.933 "mibps": 2394.902105390556, 00:08:04.933 "io_failed": 0, 00:08:04.933 "io_timeout": 0, 00:08:04.933 "avg_latency_us": 49.58728833381997, 00:08:04.933 "min_latency_us": 23.36419213973799, 00:08:04.933 "max_latency_us": 1466.6899563318777 00:08:04.933 } 00:08:04.933 ], 00:08:04.933 "core_count": 1 00:08:04.933 } 00:08:04.933 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74478 ']' 00:08:04.933 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74478 00:08:04.933 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:04.933 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.933 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74478 00:08:05.192 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.192 killing process with pid 74478 00:08:05.192 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.192 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74478' 00:08:05.192 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74478 00:08:05.192 [2024-11-20 
13:21:46.606633] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.192 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74478 00:08:05.192 [2024-11-20 13:21:46.622705] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:05.192 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tkQ5ACbW1t 00:08:05.192 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:05.192 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:05.192 ************************************ 00:08:05.192 END TEST raid_read_error_test 00:08:05.192 ************************************ 00:08:05.192 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:05.192 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:05.192 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:05.192 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:05.192 13:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:05.192 00:08:05.192 real 0m3.262s 00:08:05.192 user 0m4.256s 00:08:05.192 sys 0m0.476s 00:08:05.193 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.193 13:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.452 13:21:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:05.452 13:21:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:05.452 13:21:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.452 13:21:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:05.452 ************************************ 00:08:05.452 START TEST 
raid_write_error_test 00:08:05.452 ************************************ 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:05.452 13:21:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DmbvVZmvF5 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74607 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74607 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74607 ']' 00:08:05.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.452 13:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.452 [2024-11-20 13:21:46.988158] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:08:05.452 [2024-11-20 13:21:46.988275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74607 ] 00:08:05.711 [2024-11-20 13:21:47.141233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.711 [2024-11-20 13:21:47.166541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.711 [2024-11-20 13:21:47.209499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.711 [2024-11-20 13:21:47.209533] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.279 BaseBdev1_malloc 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.279 true 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.279 [2024-11-20 13:21:47.839674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:06.279 [2024-11-20 13:21:47.839728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.279 [2024-11-20 13:21:47.839754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:06.279 [2024-11-20 13:21:47.839763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.279 [2024-11-20 13:21:47.841982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.279 [2024-11-20 13:21:47.842048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:06.279 BaseBdev1 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.279 BaseBdev2_malloc 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:06.279 13:21:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.279 true 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.279 [2024-11-20 13:21:47.868288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:06.279 [2024-11-20 13:21:47.868333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.279 [2024-11-20 13:21:47.868351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:06.279 [2024-11-20 13:21:47.868367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.279 [2024-11-20 13:21:47.870392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.279 [2024-11-20 13:21:47.870483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:06.279 BaseBdev2 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.279 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.279 [2024-11-20 13:21:47.876331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:06.279 [2024-11-20 13:21:47.878186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.279 [2024-11-20 13:21:47.878367] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:06.279 [2024-11-20 13:21:47.878384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:06.279 [2024-11-20 13:21:47.878652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:06.279 [2024-11-20 13:21:47.878771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:06.280 [2024-11-20 13:21:47.878782] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:06.280 [2024-11-20 13:21:47.878909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.280 "name": "raid_bdev1", 00:08:06.280 "uuid": "7ea413fe-5b51-49cc-af6d-385bfd5975cc", 00:08:06.280 "strip_size_kb": 0, 00:08:06.280 "state": "online", 00:08:06.280 "raid_level": "raid1", 00:08:06.280 "superblock": true, 00:08:06.280 "num_base_bdevs": 2, 00:08:06.280 "num_base_bdevs_discovered": 2, 00:08:06.280 "num_base_bdevs_operational": 2, 00:08:06.280 "base_bdevs_list": [ 00:08:06.280 { 00:08:06.280 "name": "BaseBdev1", 00:08:06.280 "uuid": "52fe75d9-f576-5fad-986c-b17f72b7c605", 00:08:06.280 "is_configured": true, 00:08:06.280 "data_offset": 2048, 00:08:06.280 "data_size": 63488 00:08:06.280 }, 00:08:06.280 { 00:08:06.280 "name": "BaseBdev2", 00:08:06.280 "uuid": "acdda51c-620a-56b9-ac91-ac956c765ecf", 00:08:06.280 "is_configured": true, 00:08:06.280 "data_offset": 2048, 00:08:06.280 "data_size": 63488 00:08:06.280 } 00:08:06.280 ] 00:08:06.280 }' 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.280 13:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.848 13:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:06.849 13:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:06.849 [2024-11-20 13:21:48.419914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.787 [2024-11-20 13:21:49.345344] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:07.787 [2024-11-20 13:21:49.345466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:07.787 [2024-11-20 13:21:49.345701] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002a10 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.787 13:21:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.787 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.788 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.788 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.788 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.788 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.788 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.788 "name": "raid_bdev1", 00:08:07.788 "uuid": "7ea413fe-5b51-49cc-af6d-385bfd5975cc", 00:08:07.788 "strip_size_kb": 0, 00:08:07.788 "state": "online", 00:08:07.788 "raid_level": "raid1", 00:08:07.788 "superblock": true, 00:08:07.788 "num_base_bdevs": 2, 00:08:07.788 "num_base_bdevs_discovered": 1, 00:08:07.788 "num_base_bdevs_operational": 1, 00:08:07.788 "base_bdevs_list": [ 00:08:07.788 { 00:08:07.788 "name": null, 00:08:07.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.788 "is_configured": false, 00:08:07.788 "data_offset": 0, 00:08:07.788 "data_size": 63488 00:08:07.788 }, 
00:08:07.788 { 00:08:07.788 "name": "BaseBdev2", 00:08:07.788 "uuid": "acdda51c-620a-56b9-ac91-ac956c765ecf", 00:08:07.788 "is_configured": true, 00:08:07.788 "data_offset": 2048, 00:08:07.788 "data_size": 63488 00:08:07.788 } 00:08:07.788 ] 00:08:07.788 }' 00:08:07.788 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.788 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.372 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:08.372 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.372 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.372 [2024-11-20 13:21:49.754810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.372 [2024-11-20 13:21:49.754907] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.372 [2024-11-20 13:21:49.757607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.372 [2024-11-20 13:21:49.757709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.372 [2024-11-20 13:21:49.757780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.372 [2024-11-20 13:21:49.757826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:08.372 { 00:08:08.372 "results": [ 00:08:08.372 { 00:08:08.372 "job": "raid_bdev1", 00:08:08.372 "core_mask": "0x1", 00:08:08.372 "workload": "randrw", 00:08:08.372 "percentage": 50, 00:08:08.372 "status": "finished", 00:08:08.372 "queue_depth": 1, 00:08:08.372 "io_size": 131072, 00:08:08.372 "runtime": 1.335467, 00:08:08.372 "iops": 21674.814877492292, 00:08:08.372 "mibps": 2709.3518596865365, 00:08:08.372 "io_failed": 0, 
00:08:08.372 "io_timeout": 0, 00:08:08.372 "avg_latency_us": 43.53606489662878, 00:08:08.372 "min_latency_us": 22.022707423580787, 00:08:08.372 "max_latency_us": 1452.380786026201 00:08:08.372 } 00:08:08.372 ], 00:08:08.372 "core_count": 1 00:08:08.372 } 00:08:08.372 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.372 13:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74607 00:08:08.372 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74607 ']' 00:08:08.372 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74607 00:08:08.372 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:08.372 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.372 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74607 00:08:08.372 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.372 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.372 killing process with pid 74607 00:08:08.372 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74607' 00:08:08.372 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74607 00:08:08.372 [2024-11-20 13:21:49.792512] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:08.372 13:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74607 00:08:08.372 [2024-11-20 13:21:49.808352] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.372 13:21:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DmbvVZmvF5 00:08:08.372 13:21:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:08.372 13:21:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:08.372 13:21:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:08.372 13:21:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:08.372 13:21:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.372 13:21:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:08.372 13:21:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:08.372 00:08:08.372 real 0m3.130s 00:08:08.372 user 0m3.990s 00:08:08.372 sys 0m0.478s 00:08:08.372 13:21:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.372 13:21:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.372 ************************************ 00:08:08.372 END TEST raid_write_error_test 00:08:08.372 ************************************ 00:08:08.631 13:21:50 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:08.631 13:21:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:08.631 13:21:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:08.631 13:21:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:08.631 13:21:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.631 13:21:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.631 ************************************ 00:08:08.631 START TEST raid_state_function_test 00:08:08.631 ************************************ 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:08.631 13:21:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:08.631 Process raid pid: 74734 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74734 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74734' 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74734 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 74734 ']' 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.631 13:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.631 [2024-11-20 13:21:50.184151] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:08:08.631 [2024-11-20 13:21:50.184400] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.890 [2024-11-20 13:21:50.344319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.890 [2024-11-20 13:21:50.372046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.890 [2024-11-20 13:21:50.415199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.890 [2024-11-20 13:21:50.415237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.458 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.458 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:09.458 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.458 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.458 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.458 [2024-11-20 13:21:51.040868] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.458 [2024-11-20 13:21:51.040968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.458 [2024-11-20 13:21:51.041000] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.458 [2024-11-20 13:21:51.041011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.458 [2024-11-20 13:21:51.041034] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:09.458 [2024-11-20 13:21:51.041044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:09.458 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.458 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.458 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.458 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.458 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.458 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.459 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.459 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.459 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.459 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.459 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.459 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.459 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:09.459 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.459 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.459 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.459 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.459 "name": "Existed_Raid", 00:08:09.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.459 "strip_size_kb": 64, 00:08:09.459 "state": "configuring", 00:08:09.459 "raid_level": "raid0", 00:08:09.459 "superblock": false, 00:08:09.459 "num_base_bdevs": 3, 00:08:09.459 "num_base_bdevs_discovered": 0, 00:08:09.459 "num_base_bdevs_operational": 3, 00:08:09.459 "base_bdevs_list": [ 00:08:09.459 { 00:08:09.459 "name": "BaseBdev1", 00:08:09.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.459 "is_configured": false, 00:08:09.459 "data_offset": 0, 00:08:09.459 "data_size": 0 00:08:09.459 }, 00:08:09.459 { 00:08:09.459 "name": "BaseBdev2", 00:08:09.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.459 "is_configured": false, 00:08:09.459 "data_offset": 0, 00:08:09.459 "data_size": 0 00:08:09.459 }, 00:08:09.459 { 00:08:09.459 "name": "BaseBdev3", 00:08:09.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.459 "is_configured": false, 00:08:09.459 "data_offset": 0, 00:08:09.459 "data_size": 0 00:08:09.459 } 00:08:09.459 ] 00:08:09.459 }' 00:08:09.459 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.459 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.028 13:21:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.028 [2024-11-20 13:21:51.500031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.028 [2024-11-20 13:21:51.500126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.028 [2024-11-20 13:21:51.512025] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.028 [2024-11-20 13:21:51.512107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.028 [2024-11-20 13:21:51.512135] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.028 [2024-11-20 13:21:51.512158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.028 [2024-11-20 13:21:51.512177] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:10.028 [2024-11-20 13:21:51.512197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.028 [2024-11-20 13:21:51.533383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.028 BaseBdev1 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.028 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.028 [ 00:08:10.028 { 00:08:10.028 "name": "BaseBdev1", 00:08:10.028 "aliases": [ 00:08:10.028 "894397db-e180-4b08-9169-79dd988e769d" 00:08:10.028 ], 00:08:10.028 
"product_name": "Malloc disk", 00:08:10.028 "block_size": 512, 00:08:10.028 "num_blocks": 65536, 00:08:10.028 "uuid": "894397db-e180-4b08-9169-79dd988e769d", 00:08:10.028 "assigned_rate_limits": { 00:08:10.028 "rw_ios_per_sec": 0, 00:08:10.028 "rw_mbytes_per_sec": 0, 00:08:10.028 "r_mbytes_per_sec": 0, 00:08:10.028 "w_mbytes_per_sec": 0 00:08:10.028 }, 00:08:10.028 "claimed": true, 00:08:10.028 "claim_type": "exclusive_write", 00:08:10.028 "zoned": false, 00:08:10.028 "supported_io_types": { 00:08:10.028 "read": true, 00:08:10.028 "write": true, 00:08:10.028 "unmap": true, 00:08:10.028 "flush": true, 00:08:10.028 "reset": true, 00:08:10.028 "nvme_admin": false, 00:08:10.028 "nvme_io": false, 00:08:10.028 "nvme_io_md": false, 00:08:10.029 "write_zeroes": true, 00:08:10.029 "zcopy": true, 00:08:10.029 "get_zone_info": false, 00:08:10.029 "zone_management": false, 00:08:10.029 "zone_append": false, 00:08:10.029 "compare": false, 00:08:10.029 "compare_and_write": false, 00:08:10.029 "abort": true, 00:08:10.029 "seek_hole": false, 00:08:10.029 "seek_data": false, 00:08:10.029 "copy": true, 00:08:10.029 "nvme_iov_md": false 00:08:10.029 }, 00:08:10.029 "memory_domains": [ 00:08:10.029 { 00:08:10.029 "dma_device_id": "system", 00:08:10.029 "dma_device_type": 1 00:08:10.029 }, 00:08:10.029 { 00:08:10.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.029 "dma_device_type": 2 00:08:10.029 } 00:08:10.029 ], 00:08:10.029 "driver_specific": {} 00:08:10.029 } 00:08:10.029 ] 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.029 13:21:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.029 "name": "Existed_Raid", 00:08:10.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.029 "strip_size_kb": 64, 00:08:10.029 "state": "configuring", 00:08:10.029 "raid_level": "raid0", 00:08:10.029 "superblock": false, 00:08:10.029 "num_base_bdevs": 3, 00:08:10.029 "num_base_bdevs_discovered": 1, 00:08:10.029 "num_base_bdevs_operational": 3, 00:08:10.029 "base_bdevs_list": [ 00:08:10.029 { 00:08:10.029 "name": "BaseBdev1", 
00:08:10.029 "uuid": "894397db-e180-4b08-9169-79dd988e769d", 00:08:10.029 "is_configured": true, 00:08:10.029 "data_offset": 0, 00:08:10.029 "data_size": 65536 00:08:10.029 }, 00:08:10.029 { 00:08:10.029 "name": "BaseBdev2", 00:08:10.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.029 "is_configured": false, 00:08:10.029 "data_offset": 0, 00:08:10.029 "data_size": 0 00:08:10.029 }, 00:08:10.029 { 00:08:10.029 "name": "BaseBdev3", 00:08:10.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.029 "is_configured": false, 00:08:10.029 "data_offset": 0, 00:08:10.029 "data_size": 0 00:08:10.029 } 00:08:10.029 ] 00:08:10.029 }' 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.029 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.289 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.289 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.289 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.289 [2024-11-20 13:21:51.952729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.289 [2024-11-20 13:21:51.952793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.549 [2024-11-20 
13:21:51.964744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.549 [2024-11-20 13:21:51.966797] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.549 [2024-11-20 13:21:51.966839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.549 [2024-11-20 13:21:51.966849] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:10.549 [2024-11-20 13:21:51.966859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.549 13:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.549 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.549 "name": "Existed_Raid", 00:08:10.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.549 "strip_size_kb": 64, 00:08:10.549 "state": "configuring", 00:08:10.549 "raid_level": "raid0", 00:08:10.549 "superblock": false, 00:08:10.549 "num_base_bdevs": 3, 00:08:10.549 "num_base_bdevs_discovered": 1, 00:08:10.549 "num_base_bdevs_operational": 3, 00:08:10.549 "base_bdevs_list": [ 00:08:10.549 { 00:08:10.549 "name": "BaseBdev1", 00:08:10.549 "uuid": "894397db-e180-4b08-9169-79dd988e769d", 00:08:10.549 "is_configured": true, 00:08:10.549 "data_offset": 0, 00:08:10.549 "data_size": 65536 00:08:10.549 }, 00:08:10.549 { 00:08:10.549 "name": "BaseBdev2", 00:08:10.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.549 "is_configured": false, 00:08:10.549 "data_offset": 0, 00:08:10.549 "data_size": 0 00:08:10.549 }, 00:08:10.549 { 00:08:10.549 "name": "BaseBdev3", 00:08:10.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.549 "is_configured": false, 00:08:10.549 "data_offset": 0, 00:08:10.549 "data_size": 0 00:08:10.549 } 00:08:10.549 ] 00:08:10.549 }' 00:08:10.549 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:10.549 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.809 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:10.809 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.809 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.809 [2024-11-20 13:21:52.387055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.809 BaseBdev2 00:08:10.809 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.809 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:10.809 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:10.809 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.809 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:10.809 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.809 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.809 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.809 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.809 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.809 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.809 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:10.809 13:21:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.809 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.809 [ 00:08:10.809 { 00:08:10.809 "name": "BaseBdev2", 00:08:10.809 "aliases": [ 00:08:10.809 "3f8a7f14-30b9-428d-9561-1f5ffed3912a" 00:08:10.809 ], 00:08:10.809 "product_name": "Malloc disk", 00:08:10.809 "block_size": 512, 00:08:10.809 "num_blocks": 65536, 00:08:10.809 "uuid": "3f8a7f14-30b9-428d-9561-1f5ffed3912a", 00:08:10.809 "assigned_rate_limits": { 00:08:10.809 "rw_ios_per_sec": 0, 00:08:10.809 "rw_mbytes_per_sec": 0, 00:08:10.809 "r_mbytes_per_sec": 0, 00:08:10.809 "w_mbytes_per_sec": 0 00:08:10.809 }, 00:08:10.809 "claimed": true, 00:08:10.809 "claim_type": "exclusive_write", 00:08:10.809 "zoned": false, 00:08:10.809 "supported_io_types": { 00:08:10.809 "read": true, 00:08:10.809 "write": true, 00:08:10.809 "unmap": true, 00:08:10.809 "flush": true, 00:08:10.809 "reset": true, 00:08:10.809 "nvme_admin": false, 00:08:10.809 "nvme_io": false, 00:08:10.809 "nvme_io_md": false, 00:08:10.809 "write_zeroes": true, 00:08:10.809 "zcopy": true, 00:08:10.809 "get_zone_info": false, 00:08:10.809 "zone_management": false, 00:08:10.809 "zone_append": false, 00:08:10.809 "compare": false, 00:08:10.809 "compare_and_write": false, 00:08:10.809 "abort": true, 00:08:10.809 "seek_hole": false, 00:08:10.809 "seek_data": false, 00:08:10.809 "copy": true, 00:08:10.809 "nvme_iov_md": false 00:08:10.809 }, 00:08:10.809 "memory_domains": [ 00:08:10.809 { 00:08:10.809 "dma_device_id": "system", 00:08:10.809 "dma_device_type": 1 00:08:10.809 }, 00:08:10.809 { 00:08:10.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.809 "dma_device_type": 2 00:08:10.809 } 00:08:10.810 ], 00:08:10.810 "driver_specific": {} 00:08:10.810 } 00:08:10.810 ] 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.810 13:21:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.810 "name": "Existed_Raid", 00:08:10.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.810 "strip_size_kb": 64, 00:08:10.810 "state": "configuring", 00:08:10.810 "raid_level": "raid0", 00:08:10.810 "superblock": false, 00:08:10.810 "num_base_bdevs": 3, 00:08:10.810 "num_base_bdevs_discovered": 2, 00:08:10.810 "num_base_bdevs_operational": 3, 00:08:10.810 "base_bdevs_list": [ 00:08:10.810 { 00:08:10.810 "name": "BaseBdev1", 00:08:10.810 "uuid": "894397db-e180-4b08-9169-79dd988e769d", 00:08:10.810 "is_configured": true, 00:08:10.810 "data_offset": 0, 00:08:10.810 "data_size": 65536 00:08:10.810 }, 00:08:10.810 { 00:08:10.810 "name": "BaseBdev2", 00:08:10.810 "uuid": "3f8a7f14-30b9-428d-9561-1f5ffed3912a", 00:08:10.810 "is_configured": true, 00:08:10.810 "data_offset": 0, 00:08:10.810 "data_size": 65536 00:08:10.810 }, 00:08:10.810 { 00:08:10.810 "name": "BaseBdev3", 00:08:10.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.810 "is_configured": false, 00:08:10.810 "data_offset": 0, 00:08:10.810 "data_size": 0 00:08:10.810 } 00:08:10.810 ] 00:08:10.810 }' 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.810 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.379 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.380 [2024-11-20 13:21:52.815681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:11.380 [2024-11-20 13:21:52.815826] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:11.380 [2024-11-20 13:21:52.815865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:11.380 [2024-11-20 13:21:52.816264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:11.380 [2024-11-20 13:21:52.816503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:11.380 [2024-11-20 13:21:52.816557] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:11.380 [2024-11-20 13:21:52.816855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.380 BaseBdev3 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.380 
13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.380 [ 00:08:11.380 { 00:08:11.380 "name": "BaseBdev3", 00:08:11.380 "aliases": [ 00:08:11.380 "0d763b3f-be15-446c-85b0-8b197f2fe2ce" 00:08:11.380 ], 00:08:11.380 "product_name": "Malloc disk", 00:08:11.380 "block_size": 512, 00:08:11.380 "num_blocks": 65536, 00:08:11.380 "uuid": "0d763b3f-be15-446c-85b0-8b197f2fe2ce", 00:08:11.380 "assigned_rate_limits": { 00:08:11.380 "rw_ios_per_sec": 0, 00:08:11.380 "rw_mbytes_per_sec": 0, 00:08:11.380 "r_mbytes_per_sec": 0, 00:08:11.380 "w_mbytes_per_sec": 0 00:08:11.380 }, 00:08:11.380 "claimed": true, 00:08:11.380 "claim_type": "exclusive_write", 00:08:11.380 "zoned": false, 00:08:11.380 "supported_io_types": { 00:08:11.380 "read": true, 00:08:11.380 "write": true, 00:08:11.380 "unmap": true, 00:08:11.380 "flush": true, 00:08:11.380 "reset": true, 00:08:11.380 "nvme_admin": false, 00:08:11.380 "nvme_io": false, 00:08:11.380 "nvme_io_md": false, 00:08:11.380 "write_zeroes": true, 00:08:11.380 "zcopy": true, 00:08:11.380 "get_zone_info": false, 00:08:11.380 "zone_management": false, 00:08:11.380 "zone_append": false, 00:08:11.380 "compare": false, 00:08:11.380 "compare_and_write": false, 00:08:11.380 "abort": true, 00:08:11.380 "seek_hole": false, 00:08:11.380 "seek_data": false, 00:08:11.380 "copy": true, 00:08:11.380 "nvme_iov_md": false 00:08:11.380 }, 00:08:11.380 "memory_domains": [ 00:08:11.380 { 00:08:11.380 "dma_device_id": "system", 00:08:11.380 "dma_device_type": 1 00:08:11.380 }, 00:08:11.380 { 00:08:11.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.380 "dma_device_type": 2 00:08:11.380 } 00:08:11.380 ], 00:08:11.380 "driver_specific": {} 00:08:11.380 } 00:08:11.380 ] 
00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.380 "name": "Existed_Raid", 00:08:11.380 "uuid": "d57caec0-2ef8-415e-b7d1-968d77ece882", 00:08:11.380 "strip_size_kb": 64, 00:08:11.380 "state": "online", 00:08:11.380 "raid_level": "raid0", 00:08:11.380 "superblock": false, 00:08:11.380 "num_base_bdevs": 3, 00:08:11.380 "num_base_bdevs_discovered": 3, 00:08:11.380 "num_base_bdevs_operational": 3, 00:08:11.380 "base_bdevs_list": [ 00:08:11.380 { 00:08:11.380 "name": "BaseBdev1", 00:08:11.380 "uuid": "894397db-e180-4b08-9169-79dd988e769d", 00:08:11.380 "is_configured": true, 00:08:11.380 "data_offset": 0, 00:08:11.380 "data_size": 65536 00:08:11.380 }, 00:08:11.380 { 00:08:11.380 "name": "BaseBdev2", 00:08:11.380 "uuid": "3f8a7f14-30b9-428d-9561-1f5ffed3912a", 00:08:11.380 "is_configured": true, 00:08:11.380 "data_offset": 0, 00:08:11.380 "data_size": 65536 00:08:11.380 }, 00:08:11.380 { 00:08:11.380 "name": "BaseBdev3", 00:08:11.380 "uuid": "0d763b3f-be15-446c-85b0-8b197f2fe2ce", 00:08:11.380 "is_configured": true, 00:08:11.380 "data_offset": 0, 00:08:11.380 "data_size": 65536 00:08:11.380 } 00:08:11.380 ] 00:08:11.380 }' 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.380 13:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.950 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:11.950 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:11.950 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.950 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:11.950 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.950 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.950 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:11.950 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.950 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.950 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.950 [2024-11-20 13:21:53.339161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.950 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.950 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.950 "name": "Existed_Raid", 00:08:11.950 "aliases": [ 00:08:11.950 "d57caec0-2ef8-415e-b7d1-968d77ece882" 00:08:11.950 ], 00:08:11.950 "product_name": "Raid Volume", 00:08:11.950 "block_size": 512, 00:08:11.950 "num_blocks": 196608, 00:08:11.950 "uuid": "d57caec0-2ef8-415e-b7d1-968d77ece882", 00:08:11.951 "assigned_rate_limits": { 00:08:11.951 "rw_ios_per_sec": 0, 00:08:11.951 "rw_mbytes_per_sec": 0, 00:08:11.951 "r_mbytes_per_sec": 0, 00:08:11.951 "w_mbytes_per_sec": 0 00:08:11.951 }, 00:08:11.951 "claimed": false, 00:08:11.951 "zoned": false, 00:08:11.951 "supported_io_types": { 00:08:11.951 "read": true, 00:08:11.951 "write": true, 00:08:11.951 "unmap": true, 00:08:11.951 "flush": true, 00:08:11.951 "reset": true, 00:08:11.951 "nvme_admin": false, 00:08:11.951 "nvme_io": false, 00:08:11.951 "nvme_io_md": false, 00:08:11.951 "write_zeroes": true, 00:08:11.951 "zcopy": false, 00:08:11.951 "get_zone_info": false, 00:08:11.951 "zone_management": false, 00:08:11.951 
"zone_append": false, 00:08:11.951 "compare": false, 00:08:11.951 "compare_and_write": false, 00:08:11.951 "abort": false, 00:08:11.951 "seek_hole": false, 00:08:11.951 "seek_data": false, 00:08:11.951 "copy": false, 00:08:11.951 "nvme_iov_md": false 00:08:11.951 }, 00:08:11.951 "memory_domains": [ 00:08:11.951 { 00:08:11.951 "dma_device_id": "system", 00:08:11.951 "dma_device_type": 1 00:08:11.951 }, 00:08:11.951 { 00:08:11.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.951 "dma_device_type": 2 00:08:11.951 }, 00:08:11.951 { 00:08:11.951 "dma_device_id": "system", 00:08:11.951 "dma_device_type": 1 00:08:11.951 }, 00:08:11.951 { 00:08:11.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.951 "dma_device_type": 2 00:08:11.951 }, 00:08:11.951 { 00:08:11.951 "dma_device_id": "system", 00:08:11.951 "dma_device_type": 1 00:08:11.951 }, 00:08:11.951 { 00:08:11.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.951 "dma_device_type": 2 00:08:11.951 } 00:08:11.951 ], 00:08:11.951 "driver_specific": { 00:08:11.951 "raid": { 00:08:11.951 "uuid": "d57caec0-2ef8-415e-b7d1-968d77ece882", 00:08:11.951 "strip_size_kb": 64, 00:08:11.951 "state": "online", 00:08:11.951 "raid_level": "raid0", 00:08:11.951 "superblock": false, 00:08:11.951 "num_base_bdevs": 3, 00:08:11.951 "num_base_bdevs_discovered": 3, 00:08:11.951 "num_base_bdevs_operational": 3, 00:08:11.951 "base_bdevs_list": [ 00:08:11.951 { 00:08:11.951 "name": "BaseBdev1", 00:08:11.951 "uuid": "894397db-e180-4b08-9169-79dd988e769d", 00:08:11.951 "is_configured": true, 00:08:11.951 "data_offset": 0, 00:08:11.951 "data_size": 65536 00:08:11.951 }, 00:08:11.951 { 00:08:11.951 "name": "BaseBdev2", 00:08:11.951 "uuid": "3f8a7f14-30b9-428d-9561-1f5ffed3912a", 00:08:11.951 "is_configured": true, 00:08:11.951 "data_offset": 0, 00:08:11.951 "data_size": 65536 00:08:11.951 }, 00:08:11.951 { 00:08:11.951 "name": "BaseBdev3", 00:08:11.951 "uuid": "0d763b3f-be15-446c-85b0-8b197f2fe2ce", 00:08:11.951 "is_configured": true, 
00:08:11.951 "data_offset": 0, 00:08:11.951 "data_size": 65536 00:08:11.951 } 00:08:11.951 ] 00:08:11.951 } 00:08:11.951 } 00:08:11.951 }' 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:11.951 BaseBdev2 00:08:11.951 BaseBdev3' 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.951 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.211 [2024-11-20 13:21:53.638334] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:12.211 [2024-11-20 13:21:53.638411] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.211 [2024-11-20 13:21:53.638498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.211 "name": "Existed_Raid", 00:08:12.211 "uuid": "d57caec0-2ef8-415e-b7d1-968d77ece882", 00:08:12.211 "strip_size_kb": 64, 00:08:12.211 "state": "offline", 00:08:12.211 "raid_level": "raid0", 00:08:12.211 "superblock": false, 00:08:12.211 "num_base_bdevs": 3, 00:08:12.211 "num_base_bdevs_discovered": 2, 00:08:12.211 "num_base_bdevs_operational": 2, 00:08:12.211 "base_bdevs_list": [ 00:08:12.211 { 00:08:12.211 "name": null, 00:08:12.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.211 "is_configured": false, 00:08:12.211 "data_offset": 0, 00:08:12.211 "data_size": 65536 00:08:12.211 }, 00:08:12.211 { 00:08:12.211 "name": "BaseBdev2", 00:08:12.211 "uuid": "3f8a7f14-30b9-428d-9561-1f5ffed3912a", 00:08:12.211 "is_configured": true, 00:08:12.211 "data_offset": 0, 00:08:12.211 "data_size": 65536 00:08:12.211 }, 00:08:12.211 { 00:08:12.211 "name": "BaseBdev3", 00:08:12.211 "uuid": "0d763b3f-be15-446c-85b0-8b197f2fe2ce", 00:08:12.211 "is_configured": true, 00:08:12.211 "data_offset": 0, 00:08:12.211 "data_size": 65536 00:08:12.211 } 00:08:12.211 ] 00:08:12.211 }' 00:08:12.211 13:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.211 13:21:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.471 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:12.471 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:12.471 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.471 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:12.471 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.471 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.471 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.731 [2024-11-20 13:21:54.148838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.731 [2024-11-20 13:21:54.216165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:12.731 [2024-11-20 13:21:54.216265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.731 13:21:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.731 BaseBdev2 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:12.731 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.732 13:21:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.732 [ 00:08:12.732 { 00:08:12.732 "name": "BaseBdev2", 00:08:12.732 "aliases": [ 00:08:12.732 "5c468a56-4b37-4e2f-9e1b-5b0965bd38d3" 00:08:12.732 ], 00:08:12.732 "product_name": "Malloc disk", 00:08:12.732 "block_size": 512, 00:08:12.732 "num_blocks": 65536, 00:08:12.732 "uuid": "5c468a56-4b37-4e2f-9e1b-5b0965bd38d3", 00:08:12.732 "assigned_rate_limits": { 00:08:12.732 "rw_ios_per_sec": 0, 00:08:12.732 "rw_mbytes_per_sec": 0, 00:08:12.732 "r_mbytes_per_sec": 0, 00:08:12.732 "w_mbytes_per_sec": 0 00:08:12.732 }, 00:08:12.732 "claimed": false, 00:08:12.732 "zoned": false, 00:08:12.732 "supported_io_types": { 00:08:12.732 "read": true, 00:08:12.732 "write": true, 00:08:12.732 "unmap": true, 00:08:12.732 "flush": true, 00:08:12.732 "reset": true, 00:08:12.732 "nvme_admin": false, 00:08:12.732 "nvme_io": false, 00:08:12.732 "nvme_io_md": false, 00:08:12.732 "write_zeroes": true, 00:08:12.732 "zcopy": true, 00:08:12.732 "get_zone_info": false, 00:08:12.732 "zone_management": false, 00:08:12.732 "zone_append": false, 00:08:12.732 "compare": false, 00:08:12.732 "compare_and_write": false, 00:08:12.732 "abort": true, 00:08:12.732 "seek_hole": false, 00:08:12.732 "seek_data": false, 00:08:12.732 "copy": true, 00:08:12.732 "nvme_iov_md": false 00:08:12.732 }, 00:08:12.732 "memory_domains": [ 00:08:12.732 { 00:08:12.732 "dma_device_id": "system", 00:08:12.732 "dma_device_type": 1 00:08:12.732 }, 00:08:12.732 { 00:08:12.732 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:12.732 "dma_device_type": 2 00:08:12.732 } 00:08:12.732 ], 00:08:12.732 "driver_specific": {} 00:08:12.732 } 00:08:12.732 ] 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.732 BaseBdev3 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.732 13:21:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.732 [ 00:08:12.732 { 00:08:12.732 "name": "BaseBdev3", 00:08:12.732 "aliases": [ 00:08:12.732 "503caffd-c41e-460c-a769-af314dde99dd" 00:08:12.732 ], 00:08:12.732 "product_name": "Malloc disk", 00:08:12.732 "block_size": 512, 00:08:12.732 "num_blocks": 65536, 00:08:12.732 "uuid": "503caffd-c41e-460c-a769-af314dde99dd", 00:08:12.732 "assigned_rate_limits": { 00:08:12.732 "rw_ios_per_sec": 0, 00:08:12.732 "rw_mbytes_per_sec": 0, 00:08:12.732 "r_mbytes_per_sec": 0, 00:08:12.732 "w_mbytes_per_sec": 0 00:08:12.732 }, 00:08:12.732 "claimed": false, 00:08:12.732 "zoned": false, 00:08:12.732 "supported_io_types": { 00:08:12.732 "read": true, 00:08:12.732 "write": true, 00:08:12.732 "unmap": true, 00:08:12.732 "flush": true, 00:08:12.732 "reset": true, 00:08:12.732 "nvme_admin": false, 00:08:12.732 "nvme_io": false, 00:08:12.732 "nvme_io_md": false, 00:08:12.732 "write_zeroes": true, 00:08:12.732 "zcopy": true, 00:08:12.732 "get_zone_info": false, 00:08:12.732 "zone_management": false, 00:08:12.732 "zone_append": false, 00:08:12.732 "compare": false, 00:08:12.732 "compare_and_write": false, 00:08:12.732 "abort": true, 00:08:12.732 "seek_hole": false, 00:08:12.732 "seek_data": false, 00:08:12.732 "copy": true, 00:08:12.732 "nvme_iov_md": false 00:08:12.732 }, 00:08:12.732 "memory_domains": [ 00:08:12.732 { 00:08:12.732 "dma_device_id": "system", 00:08:12.732 "dma_device_type": 1 00:08:12.732 }, 00:08:12.732 { 00:08:12.732 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:12.732 "dma_device_type": 2 00:08:12.732 } 00:08:12.732 ], 00:08:12.732 "driver_specific": {} 00:08:12.732 } 00:08:12.732 ] 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.732 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.732 [2024-11-20 13:21:54.397325] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:12.732 [2024-11-20 13:21:54.397432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:12.732 [2024-11-20 13:21:54.397478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.992 [2024-11-20 13:21:54.399335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.992 
13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.992 "name": "Existed_Raid", 00:08:12.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.992 "strip_size_kb": 64, 00:08:12.992 "state": "configuring", 00:08:12.992 "raid_level": "raid0", 00:08:12.992 "superblock": false, 00:08:12.992 "num_base_bdevs": 3, 00:08:12.992 "num_base_bdevs_discovered": 2, 00:08:12.992 "num_base_bdevs_operational": 3, 00:08:12.992 "base_bdevs_list": [ 00:08:12.992 { 00:08:12.992 "name": "BaseBdev1", 00:08:12.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.992 "is_configured": false, 00:08:12.992 
"data_offset": 0, 00:08:12.992 "data_size": 0 00:08:12.992 }, 00:08:12.992 { 00:08:12.992 "name": "BaseBdev2", 00:08:12.992 "uuid": "5c468a56-4b37-4e2f-9e1b-5b0965bd38d3", 00:08:12.992 "is_configured": true, 00:08:12.992 "data_offset": 0, 00:08:12.992 "data_size": 65536 00:08:12.992 }, 00:08:12.992 { 00:08:12.992 "name": "BaseBdev3", 00:08:12.992 "uuid": "503caffd-c41e-460c-a769-af314dde99dd", 00:08:12.992 "is_configured": true, 00:08:12.992 "data_offset": 0, 00:08:12.992 "data_size": 65536 00:08:12.992 } 00:08:12.992 ] 00:08:12.992 }' 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.992 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.281 [2024-11-20 13:21:54.872554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.281 "name": "Existed_Raid", 00:08:13.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.281 "strip_size_kb": 64, 00:08:13.281 "state": "configuring", 00:08:13.281 "raid_level": "raid0", 00:08:13.281 "superblock": false, 00:08:13.281 "num_base_bdevs": 3, 00:08:13.281 "num_base_bdevs_discovered": 1, 00:08:13.281 "num_base_bdevs_operational": 3, 00:08:13.281 "base_bdevs_list": [ 00:08:13.281 { 00:08:13.281 "name": "BaseBdev1", 00:08:13.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.281 "is_configured": false, 00:08:13.281 "data_offset": 0, 00:08:13.281 "data_size": 0 00:08:13.281 }, 00:08:13.281 { 00:08:13.281 "name": null, 00:08:13.281 "uuid": "5c468a56-4b37-4e2f-9e1b-5b0965bd38d3", 00:08:13.281 "is_configured": false, 00:08:13.281 "data_offset": 0, 00:08:13.281 "data_size": 65536 00:08:13.281 }, 00:08:13.281 { 
00:08:13.281 "name": "BaseBdev3", 00:08:13.281 "uuid": "503caffd-c41e-460c-a769-af314dde99dd", 00:08:13.281 "is_configured": true, 00:08:13.281 "data_offset": 0, 00:08:13.281 "data_size": 65536 00:08:13.281 } 00:08:13.281 ] 00:08:13.281 }' 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.281 13:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.850 [2024-11-20 13:21:55.394857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:13.850 BaseBdev1 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:13.850 13:21:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.850 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.850 [ 00:08:13.850 { 00:08:13.850 "name": "BaseBdev1", 00:08:13.850 "aliases": [ 00:08:13.850 "977166e6-fd52-42f7-a74a-5fac7b98934e" 00:08:13.850 ], 00:08:13.850 "product_name": "Malloc disk", 00:08:13.850 "block_size": 512, 00:08:13.850 "num_blocks": 65536, 00:08:13.850 "uuid": "977166e6-fd52-42f7-a74a-5fac7b98934e", 00:08:13.850 "assigned_rate_limits": { 00:08:13.850 "rw_ios_per_sec": 0, 00:08:13.850 "rw_mbytes_per_sec": 0, 00:08:13.850 "r_mbytes_per_sec": 0, 00:08:13.850 "w_mbytes_per_sec": 0 00:08:13.850 }, 00:08:13.850 "claimed": true, 00:08:13.850 "claim_type": "exclusive_write", 00:08:13.850 "zoned": false, 00:08:13.850 "supported_io_types": { 00:08:13.850 "read": true, 00:08:13.850 "write": true, 00:08:13.850 "unmap": true, 00:08:13.850 "flush": true, 
00:08:13.850 "reset": true, 00:08:13.850 "nvme_admin": false, 00:08:13.850 "nvme_io": false, 00:08:13.850 "nvme_io_md": false, 00:08:13.850 "write_zeroes": true, 00:08:13.850 "zcopy": true, 00:08:13.851 "get_zone_info": false, 00:08:13.851 "zone_management": false, 00:08:13.851 "zone_append": false, 00:08:13.851 "compare": false, 00:08:13.851 "compare_and_write": false, 00:08:13.851 "abort": true, 00:08:13.851 "seek_hole": false, 00:08:13.851 "seek_data": false, 00:08:13.851 "copy": true, 00:08:13.851 "nvme_iov_md": false 00:08:13.851 }, 00:08:13.851 "memory_domains": [ 00:08:13.851 { 00:08:13.851 "dma_device_id": "system", 00:08:13.851 "dma_device_type": 1 00:08:13.851 }, 00:08:13.851 { 00:08:13.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.851 "dma_device_type": 2 00:08:13.851 } 00:08:13.851 ], 00:08:13.851 "driver_specific": {} 00:08:13.851 } 00:08:13.851 ] 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.851 "name": "Existed_Raid", 00:08:13.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.851 "strip_size_kb": 64, 00:08:13.851 "state": "configuring", 00:08:13.851 "raid_level": "raid0", 00:08:13.851 "superblock": false, 00:08:13.851 "num_base_bdevs": 3, 00:08:13.851 "num_base_bdevs_discovered": 2, 00:08:13.851 "num_base_bdevs_operational": 3, 00:08:13.851 "base_bdevs_list": [ 00:08:13.851 { 00:08:13.851 "name": "BaseBdev1", 00:08:13.851 "uuid": "977166e6-fd52-42f7-a74a-5fac7b98934e", 00:08:13.851 "is_configured": true, 00:08:13.851 "data_offset": 0, 00:08:13.851 "data_size": 65536 00:08:13.851 }, 00:08:13.851 { 00:08:13.851 "name": null, 00:08:13.851 "uuid": "5c468a56-4b37-4e2f-9e1b-5b0965bd38d3", 00:08:13.851 "is_configured": false, 00:08:13.851 "data_offset": 0, 00:08:13.851 "data_size": 65536 00:08:13.851 }, 00:08:13.851 { 00:08:13.851 "name": "BaseBdev3", 00:08:13.851 "uuid": "503caffd-c41e-460c-a769-af314dde99dd", 00:08:13.851 "is_configured": true, 00:08:13.851 "data_offset": 0, 00:08:13.851 "data_size": 65536 
00:08:13.851 } 00:08:13.851 ] 00:08:13.851 }' 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.851 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.420 [2024-11-20 13:21:55.926046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.420 
13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.420 "name": "Existed_Raid", 00:08:14.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.420 "strip_size_kb": 64, 00:08:14.420 "state": "configuring", 00:08:14.420 "raid_level": "raid0", 00:08:14.420 "superblock": false, 00:08:14.420 "num_base_bdevs": 3, 00:08:14.420 "num_base_bdevs_discovered": 1, 00:08:14.420 "num_base_bdevs_operational": 3, 00:08:14.420 "base_bdevs_list": [ 00:08:14.420 { 00:08:14.420 "name": "BaseBdev1", 00:08:14.420 "uuid": "977166e6-fd52-42f7-a74a-5fac7b98934e", 00:08:14.420 "is_configured": true, 00:08:14.420 "data_offset": 0, 00:08:14.420 "data_size": 65536 00:08:14.420 }, 00:08:14.420 { 00:08:14.420 "name": null, 
00:08:14.420 "uuid": "5c468a56-4b37-4e2f-9e1b-5b0965bd38d3", 00:08:14.420 "is_configured": false, 00:08:14.420 "data_offset": 0, 00:08:14.420 "data_size": 65536 00:08:14.420 }, 00:08:14.420 { 00:08:14.420 "name": null, 00:08:14.420 "uuid": "503caffd-c41e-460c-a769-af314dde99dd", 00:08:14.420 "is_configured": false, 00:08:14.420 "data_offset": 0, 00:08:14.420 "data_size": 65536 00:08:14.420 } 00:08:14.420 ] 00:08:14.420 }' 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.420 13:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.679 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:14.679 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.679 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.679 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.679 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.679 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:14.679 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:14.679 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.679 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.939 [2024-11-20 13:21:56.349309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.939 "name": "Existed_Raid", 00:08:14.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.939 "strip_size_kb": 64, 00:08:14.939 "state": "configuring", 00:08:14.939 "raid_level": "raid0", 00:08:14.939 "superblock": false, 00:08:14.939 
"num_base_bdevs": 3, 00:08:14.939 "num_base_bdevs_discovered": 2, 00:08:14.939 "num_base_bdevs_operational": 3, 00:08:14.939 "base_bdevs_list": [ 00:08:14.939 { 00:08:14.939 "name": "BaseBdev1", 00:08:14.939 "uuid": "977166e6-fd52-42f7-a74a-5fac7b98934e", 00:08:14.939 "is_configured": true, 00:08:14.939 "data_offset": 0, 00:08:14.939 "data_size": 65536 00:08:14.939 }, 00:08:14.939 { 00:08:14.939 "name": null, 00:08:14.939 "uuid": "5c468a56-4b37-4e2f-9e1b-5b0965bd38d3", 00:08:14.939 "is_configured": false, 00:08:14.939 "data_offset": 0, 00:08:14.939 "data_size": 65536 00:08:14.939 }, 00:08:14.939 { 00:08:14.939 "name": "BaseBdev3", 00:08:14.939 "uuid": "503caffd-c41e-460c-a769-af314dde99dd", 00:08:14.939 "is_configured": true, 00:08:14.939 "data_offset": 0, 00:08:14.939 "data_size": 65536 00:08:14.939 } 00:08:14.939 ] 00:08:14.939 }' 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.939 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.199 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:15.199 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.199 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.199 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.199 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.458 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:15.458 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:15.458 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.458 13:21:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.458 [2024-11-20 13:21:56.884481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:15.458 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.458 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.458 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.458 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.458 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.458 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.458 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.458 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.458 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.458 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.458 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.458 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.459 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.459 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.459 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.459 13:21:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.459 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.459 "name": "Existed_Raid", 00:08:15.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.459 "strip_size_kb": 64, 00:08:15.459 "state": "configuring", 00:08:15.459 "raid_level": "raid0", 00:08:15.459 "superblock": false, 00:08:15.459 "num_base_bdevs": 3, 00:08:15.459 "num_base_bdevs_discovered": 1, 00:08:15.459 "num_base_bdevs_operational": 3, 00:08:15.459 "base_bdevs_list": [ 00:08:15.459 { 00:08:15.459 "name": null, 00:08:15.459 "uuid": "977166e6-fd52-42f7-a74a-5fac7b98934e", 00:08:15.459 "is_configured": false, 00:08:15.459 "data_offset": 0, 00:08:15.459 "data_size": 65536 00:08:15.459 }, 00:08:15.459 { 00:08:15.459 "name": null, 00:08:15.459 "uuid": "5c468a56-4b37-4e2f-9e1b-5b0965bd38d3", 00:08:15.459 "is_configured": false, 00:08:15.459 "data_offset": 0, 00:08:15.459 "data_size": 65536 00:08:15.459 }, 00:08:15.459 { 00:08:15.459 "name": "BaseBdev3", 00:08:15.459 "uuid": "503caffd-c41e-460c-a769-af314dde99dd", 00:08:15.459 "is_configured": true, 00:08:15.459 "data_offset": 0, 00:08:15.459 "data_size": 65536 00:08:15.459 } 00:08:15.459 ] 00:08:15.459 }' 00:08:15.459 13:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.459 13:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.719 [2024-11-20 13:21:57.346439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.719 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.981 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.981 "name": "Existed_Raid", 00:08:15.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.981 "strip_size_kb": 64, 00:08:15.981 "state": "configuring", 00:08:15.981 "raid_level": "raid0", 00:08:15.981 "superblock": false, 00:08:15.981 "num_base_bdevs": 3, 00:08:15.981 "num_base_bdevs_discovered": 2, 00:08:15.981 "num_base_bdevs_operational": 3, 00:08:15.981 "base_bdevs_list": [ 00:08:15.981 { 00:08:15.981 "name": null, 00:08:15.981 "uuid": "977166e6-fd52-42f7-a74a-5fac7b98934e", 00:08:15.981 "is_configured": false, 00:08:15.981 "data_offset": 0, 00:08:15.981 "data_size": 65536 00:08:15.981 }, 00:08:15.981 { 00:08:15.981 "name": "BaseBdev2", 00:08:15.981 "uuid": "5c468a56-4b37-4e2f-9e1b-5b0965bd38d3", 00:08:15.981 "is_configured": true, 00:08:15.981 "data_offset": 0, 00:08:15.981 "data_size": 65536 00:08:15.981 }, 00:08:15.981 { 00:08:15.981 "name": "BaseBdev3", 00:08:15.981 "uuid": "503caffd-c41e-460c-a769-af314dde99dd", 00:08:15.981 "is_configured": true, 00:08:15.981 "data_offset": 0, 00:08:15.981 "data_size": 65536 00:08:15.981 } 00:08:15.981 ] 00:08:15.981 }' 00:08:15.981 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.981 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.242 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:16.242 
13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.242 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.242 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.242 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.242 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:16.242 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.242 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:16.242 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.242 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.242 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 977166e6-fd52-42f7-a74a-5fac7b98934e 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.516 [2024-11-20 13:21:57.952464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:16.516 [2024-11-20 13:21:57.952510] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:16.516 [2024-11-20 13:21:57.952519] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:16.516 [2024-11-20 13:21:57.952750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 
00:08:16.516 [2024-11-20 13:21:57.952861] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:16.516 [2024-11-20 13:21:57.952870] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:16.516 [2024-11-20 13:21:57.953085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.516 NewBaseBdev 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.516 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:16.516 [ 00:08:16.516 { 00:08:16.516 "name": "NewBaseBdev", 00:08:16.516 "aliases": [ 00:08:16.516 "977166e6-fd52-42f7-a74a-5fac7b98934e" 00:08:16.516 ], 00:08:16.516 "product_name": "Malloc disk", 00:08:16.516 "block_size": 512, 00:08:16.516 "num_blocks": 65536, 00:08:16.516 "uuid": "977166e6-fd52-42f7-a74a-5fac7b98934e", 00:08:16.516 "assigned_rate_limits": { 00:08:16.516 "rw_ios_per_sec": 0, 00:08:16.516 "rw_mbytes_per_sec": 0, 00:08:16.516 "r_mbytes_per_sec": 0, 00:08:16.516 "w_mbytes_per_sec": 0 00:08:16.516 }, 00:08:16.516 "claimed": true, 00:08:16.516 "claim_type": "exclusive_write", 00:08:16.516 "zoned": false, 00:08:16.516 "supported_io_types": { 00:08:16.516 "read": true, 00:08:16.516 "write": true, 00:08:16.516 "unmap": true, 00:08:16.516 "flush": true, 00:08:16.516 "reset": true, 00:08:16.516 "nvme_admin": false, 00:08:16.516 "nvme_io": false, 00:08:16.516 "nvme_io_md": false, 00:08:16.516 "write_zeroes": true, 00:08:16.516 "zcopy": true, 00:08:16.516 "get_zone_info": false, 00:08:16.516 "zone_management": false, 00:08:16.516 "zone_append": false, 00:08:16.516 "compare": false, 00:08:16.516 "compare_and_write": false, 00:08:16.516 "abort": true, 00:08:16.516 "seek_hole": false, 00:08:16.516 "seek_data": false, 00:08:16.516 "copy": true, 00:08:16.516 "nvme_iov_md": false 00:08:16.516 }, 00:08:16.516 "memory_domains": [ 00:08:16.516 { 00:08:16.516 "dma_device_id": "system", 00:08:16.516 "dma_device_type": 1 00:08:16.516 }, 00:08:16.516 { 00:08:16.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.517 "dma_device_type": 2 00:08:16.517 } 00:08:16.517 ], 00:08:16.517 "driver_specific": {} 00:08:16.517 } 00:08:16.517 ] 00:08:16.517 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.517 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:16.517 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:16.517 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.517 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.517 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.517 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.517 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.517 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.517 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.517 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.517 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.517 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.517 13:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.517 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.517 13:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.517 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.517 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.517 "name": "Existed_Raid", 00:08:16.517 "uuid": "3ee9bcce-4964-4980-8a6b-1d0da51b21ba", 00:08:16.517 "strip_size_kb": 64, 00:08:16.517 "state": "online", 00:08:16.517 "raid_level": "raid0", 00:08:16.517 "superblock": false, 00:08:16.517 "num_base_bdevs": 3, 00:08:16.517 
"num_base_bdevs_discovered": 3, 00:08:16.517 "num_base_bdevs_operational": 3, 00:08:16.517 "base_bdevs_list": [ 00:08:16.517 { 00:08:16.517 "name": "NewBaseBdev", 00:08:16.517 "uuid": "977166e6-fd52-42f7-a74a-5fac7b98934e", 00:08:16.517 "is_configured": true, 00:08:16.517 "data_offset": 0, 00:08:16.517 "data_size": 65536 00:08:16.517 }, 00:08:16.517 { 00:08:16.517 "name": "BaseBdev2", 00:08:16.517 "uuid": "5c468a56-4b37-4e2f-9e1b-5b0965bd38d3", 00:08:16.517 "is_configured": true, 00:08:16.517 "data_offset": 0, 00:08:16.517 "data_size": 65536 00:08:16.517 }, 00:08:16.517 { 00:08:16.517 "name": "BaseBdev3", 00:08:16.517 "uuid": "503caffd-c41e-460c-a769-af314dde99dd", 00:08:16.517 "is_configured": true, 00:08:16.517 "data_offset": 0, 00:08:16.517 "data_size": 65536 00:08:16.517 } 00:08:16.517 ] 00:08:16.517 }' 00:08:16.517 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.517 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.776 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:16.776 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:16.776 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:16.776 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:16.776 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:16.776 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:16.776 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:16.776 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:16.776 13:21:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.776 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.776 [2024-11-20 13:21:58.432022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.035 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.035 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.035 "name": "Existed_Raid", 00:08:17.035 "aliases": [ 00:08:17.035 "3ee9bcce-4964-4980-8a6b-1d0da51b21ba" 00:08:17.035 ], 00:08:17.035 "product_name": "Raid Volume", 00:08:17.035 "block_size": 512, 00:08:17.035 "num_blocks": 196608, 00:08:17.035 "uuid": "3ee9bcce-4964-4980-8a6b-1d0da51b21ba", 00:08:17.035 "assigned_rate_limits": { 00:08:17.035 "rw_ios_per_sec": 0, 00:08:17.035 "rw_mbytes_per_sec": 0, 00:08:17.035 "r_mbytes_per_sec": 0, 00:08:17.035 "w_mbytes_per_sec": 0 00:08:17.035 }, 00:08:17.035 "claimed": false, 00:08:17.035 "zoned": false, 00:08:17.035 "supported_io_types": { 00:08:17.035 "read": true, 00:08:17.035 "write": true, 00:08:17.035 "unmap": true, 00:08:17.035 "flush": true, 00:08:17.035 "reset": true, 00:08:17.035 "nvme_admin": false, 00:08:17.035 "nvme_io": false, 00:08:17.035 "nvme_io_md": false, 00:08:17.035 "write_zeroes": true, 00:08:17.035 "zcopy": false, 00:08:17.035 "get_zone_info": false, 00:08:17.035 "zone_management": false, 00:08:17.035 "zone_append": false, 00:08:17.035 "compare": false, 00:08:17.035 "compare_and_write": false, 00:08:17.035 "abort": false, 00:08:17.035 "seek_hole": false, 00:08:17.035 "seek_data": false, 00:08:17.035 "copy": false, 00:08:17.035 "nvme_iov_md": false 00:08:17.035 }, 00:08:17.035 "memory_domains": [ 00:08:17.035 { 00:08:17.035 "dma_device_id": "system", 00:08:17.035 "dma_device_type": 1 00:08:17.035 }, 00:08:17.035 { 00:08:17.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.035 "dma_device_type": 2 00:08:17.035 }, 
00:08:17.035 { 00:08:17.035 "dma_device_id": "system", 00:08:17.035 "dma_device_type": 1 00:08:17.035 }, 00:08:17.035 { 00:08:17.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.035 "dma_device_type": 2 00:08:17.035 }, 00:08:17.035 { 00:08:17.035 "dma_device_id": "system", 00:08:17.035 "dma_device_type": 1 00:08:17.035 }, 00:08:17.035 { 00:08:17.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.035 "dma_device_type": 2 00:08:17.035 } 00:08:17.035 ], 00:08:17.035 "driver_specific": { 00:08:17.035 "raid": { 00:08:17.035 "uuid": "3ee9bcce-4964-4980-8a6b-1d0da51b21ba", 00:08:17.035 "strip_size_kb": 64, 00:08:17.035 "state": "online", 00:08:17.035 "raid_level": "raid0", 00:08:17.036 "superblock": false, 00:08:17.036 "num_base_bdevs": 3, 00:08:17.036 "num_base_bdevs_discovered": 3, 00:08:17.036 "num_base_bdevs_operational": 3, 00:08:17.036 "base_bdevs_list": [ 00:08:17.036 { 00:08:17.036 "name": "NewBaseBdev", 00:08:17.036 "uuid": "977166e6-fd52-42f7-a74a-5fac7b98934e", 00:08:17.036 "is_configured": true, 00:08:17.036 "data_offset": 0, 00:08:17.036 "data_size": 65536 00:08:17.036 }, 00:08:17.036 { 00:08:17.036 "name": "BaseBdev2", 00:08:17.036 "uuid": "5c468a56-4b37-4e2f-9e1b-5b0965bd38d3", 00:08:17.036 "is_configured": true, 00:08:17.036 "data_offset": 0, 00:08:17.036 "data_size": 65536 00:08:17.036 }, 00:08:17.036 { 00:08:17.036 "name": "BaseBdev3", 00:08:17.036 "uuid": "503caffd-c41e-460c-a769-af314dde99dd", 00:08:17.036 "is_configured": true, 00:08:17.036 "data_offset": 0, 00:08:17.036 "data_size": 65536 00:08:17.036 } 00:08:17.036 ] 00:08:17.036 } 00:08:17.036 } 00:08:17.036 }' 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:17.036 BaseBdev2 00:08:17.036 BaseBdev3' 00:08:17.036 13:21:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.036 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.296 [2024-11-20 13:21:58.719336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.296 [2024-11-20 13:21:58.719404] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.296 [2024-11-20 13:21:58.719492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.296 [2024-11-20 13:21:58.719572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.296 [2024-11-20 13:21:58.719591] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74734 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 74734 ']' 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 74734 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74734 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.296 killing process with pid 74734 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74734' 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 74734 00:08:17.296 [2024-11-20 13:21:58.756272] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.296 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 74734 00:08:17.296 [2024-11-20 13:21:58.787479] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.555 13:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:17.555 00:08:17.555 real 0m8.903s 00:08:17.555 user 0m15.264s 00:08:17.555 sys 0m1.768s 00:08:17.555 ************************************ 00:08:17.555 END TEST 
raid_state_function_test 00:08:17.555 ************************************ 00:08:17.555 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.555 13:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.555 13:21:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:17.555 13:21:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:17.555 13:21:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.555 13:21:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.555 ************************************ 00:08:17.555 START TEST raid_state_function_test_sb 00:08:17.555 ************************************ 00:08:17.555 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:17.555 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:17.556 13:21:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=75343 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:17.556 Process raid pid: 75343 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75343' 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75343 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75343 ']' 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.556 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.556 [2024-11-20 13:21:59.161452] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:08:17.556 [2024-11-20 13:21:59.161670] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.815 [2024-11-20 13:21:59.315431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.815 [2024-11-20 13:21:59.341603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.815 [2024-11-20 13:21:59.384532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.815 [2024-11-20 13:21:59.384567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.385 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.385 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:18.385 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:18.385 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.385 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.385 [2024-11-20 13:21:59.994512] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:18.385 [2024-11-20 13:21:59.994654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:18.385 [2024-11-20 13:21:59.994670] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:18.385 [2024-11-20 13:21:59.994680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:18.385 [2024-11-20 13:21:59.994686] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:18.385 [2024-11-20 13:21:59.994697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:18.385 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.385 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:18.385 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.385 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.385 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.385 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.385 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.385 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.385 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.385 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.385 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.385 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.385 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.385 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.385 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.385 13:22:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.385 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.385 "name": "Existed_Raid", 00:08:18.385 "uuid": "464cfa9e-b0f1-4fa5-812a-7822570f414d", 00:08:18.385 "strip_size_kb": 64, 00:08:18.385 "state": "configuring", 00:08:18.385 "raid_level": "raid0", 00:08:18.385 "superblock": true, 00:08:18.385 "num_base_bdevs": 3, 00:08:18.385 "num_base_bdevs_discovered": 0, 00:08:18.385 "num_base_bdevs_operational": 3, 00:08:18.385 "base_bdevs_list": [ 00:08:18.385 { 00:08:18.385 "name": "BaseBdev1", 00:08:18.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.385 "is_configured": false, 00:08:18.385 "data_offset": 0, 00:08:18.385 "data_size": 0 00:08:18.385 }, 00:08:18.385 { 00:08:18.385 "name": "BaseBdev2", 00:08:18.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.385 "is_configured": false, 00:08:18.385 "data_offset": 0, 00:08:18.385 "data_size": 0 00:08:18.385 }, 00:08:18.385 { 00:08:18.385 "name": "BaseBdev3", 00:08:18.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.385 "is_configured": false, 00:08:18.385 "data_offset": 0, 00:08:18.385 "data_size": 0 00:08:18.385 } 00:08:18.385 ] 00:08:18.385 }' 00:08:18.385 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.385 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.954 [2024-11-20 13:22:00.449594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:18.954 [2024-11-20 13:22:00.449674] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.954 [2024-11-20 13:22:00.461598] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:18.954 [2024-11-20 13:22:00.461692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:18.954 [2024-11-20 13:22:00.461720] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:18.954 [2024-11-20 13:22:00.461743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:18.954 [2024-11-20 13:22:00.461762] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:18.954 [2024-11-20 13:22:00.461783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.954 [2024-11-20 13:22:00.478479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.954 BaseBdev1 
00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.954 [ 00:08:18.954 { 00:08:18.954 "name": "BaseBdev1", 00:08:18.954 "aliases": [ 00:08:18.954 "557a72cb-0d9e-4ef4-9901-2d090162b73f" 00:08:18.954 ], 00:08:18.954 "product_name": "Malloc disk", 00:08:18.954 "block_size": 512, 00:08:18.954 "num_blocks": 65536, 00:08:18.954 "uuid": "557a72cb-0d9e-4ef4-9901-2d090162b73f", 00:08:18.954 "assigned_rate_limits": { 00:08:18.954 
"rw_ios_per_sec": 0, 00:08:18.954 "rw_mbytes_per_sec": 0, 00:08:18.954 "r_mbytes_per_sec": 0, 00:08:18.954 "w_mbytes_per_sec": 0 00:08:18.954 }, 00:08:18.954 "claimed": true, 00:08:18.954 "claim_type": "exclusive_write", 00:08:18.954 "zoned": false, 00:08:18.954 "supported_io_types": { 00:08:18.954 "read": true, 00:08:18.954 "write": true, 00:08:18.954 "unmap": true, 00:08:18.954 "flush": true, 00:08:18.954 "reset": true, 00:08:18.954 "nvme_admin": false, 00:08:18.954 "nvme_io": false, 00:08:18.954 "nvme_io_md": false, 00:08:18.954 "write_zeroes": true, 00:08:18.954 "zcopy": true, 00:08:18.954 "get_zone_info": false, 00:08:18.954 "zone_management": false, 00:08:18.954 "zone_append": false, 00:08:18.954 "compare": false, 00:08:18.954 "compare_and_write": false, 00:08:18.954 "abort": true, 00:08:18.954 "seek_hole": false, 00:08:18.954 "seek_data": false, 00:08:18.954 "copy": true, 00:08:18.954 "nvme_iov_md": false 00:08:18.954 }, 00:08:18.954 "memory_domains": [ 00:08:18.954 { 00:08:18.954 "dma_device_id": "system", 00:08:18.954 "dma_device_type": 1 00:08:18.954 }, 00:08:18.954 { 00:08:18.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.954 "dma_device_type": 2 00:08:18.954 } 00:08:18.954 ], 00:08:18.954 "driver_specific": {} 00:08:18.954 } 00:08:18.954 ] 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.954 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.954 "name": "Existed_Raid", 00:08:18.954 "uuid": "0b11a9d7-175d-4f9c-b37c-d73442d21a36", 00:08:18.954 "strip_size_kb": 64, 00:08:18.954 "state": "configuring", 00:08:18.954 "raid_level": "raid0", 00:08:18.954 "superblock": true, 00:08:18.954 "num_base_bdevs": 3, 00:08:18.954 "num_base_bdevs_discovered": 1, 00:08:18.954 "num_base_bdevs_operational": 3, 00:08:18.954 "base_bdevs_list": [ 00:08:18.954 { 00:08:18.954 "name": "BaseBdev1", 00:08:18.954 "uuid": "557a72cb-0d9e-4ef4-9901-2d090162b73f", 00:08:18.954 "is_configured": true, 00:08:18.955 "data_offset": 2048, 00:08:18.955 "data_size": 63488 
00:08:18.955 }, 00:08:18.955 { 00:08:18.955 "name": "BaseBdev2", 00:08:18.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.955 "is_configured": false, 00:08:18.955 "data_offset": 0, 00:08:18.955 "data_size": 0 00:08:18.955 }, 00:08:18.955 { 00:08:18.955 "name": "BaseBdev3", 00:08:18.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.955 "is_configured": false, 00:08:18.955 "data_offset": 0, 00:08:18.955 "data_size": 0 00:08:18.955 } 00:08:18.955 ] 00:08:18.955 }' 00:08:18.955 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.955 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.524 [2024-11-20 13:22:00.917778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:19.524 [2024-11-20 13:22:00.917835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.524 [2024-11-20 13:22:00.925791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.524 [2024-11-20 
13:22:00.927627] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.524 [2024-11-20 13:22:00.927673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:19.524 [2024-11-20 13:22:00.927683] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:19.524 [2024-11-20 13:22:00.927693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.524 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.524 "name": "Existed_Raid", 00:08:19.524 "uuid": "a4b743f4-2f4f-4590-855b-24407b2657e6", 00:08:19.524 "strip_size_kb": 64, 00:08:19.524 "state": "configuring", 00:08:19.524 "raid_level": "raid0", 00:08:19.524 "superblock": true, 00:08:19.524 "num_base_bdevs": 3, 00:08:19.524 "num_base_bdevs_discovered": 1, 00:08:19.524 "num_base_bdevs_operational": 3, 00:08:19.524 "base_bdevs_list": [ 00:08:19.524 { 00:08:19.524 "name": "BaseBdev1", 00:08:19.524 "uuid": "557a72cb-0d9e-4ef4-9901-2d090162b73f", 00:08:19.524 "is_configured": true, 00:08:19.524 "data_offset": 2048, 00:08:19.524 "data_size": 63488 00:08:19.524 }, 00:08:19.524 { 00:08:19.524 "name": "BaseBdev2", 00:08:19.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.524 "is_configured": false, 00:08:19.524 "data_offset": 0, 00:08:19.524 "data_size": 0 00:08:19.524 }, 00:08:19.524 { 00:08:19.525 "name": "BaseBdev3", 00:08:19.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.525 "is_configured": false, 00:08:19.525 "data_offset": 0, 00:08:19.525 "data_size": 0 00:08:19.525 } 00:08:19.525 ] 00:08:19.525 }' 00:08:19.525 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.525 13:22:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.784 [2024-11-20 13:22:01.352059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:19.784 BaseBdev2 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.784 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.784 [ 00:08:19.784 { 00:08:19.784 "name": "BaseBdev2", 00:08:19.784 "aliases": [ 00:08:19.784 "cd3d1316-392b-49fa-9f57-0353d65420df" 00:08:19.784 ], 00:08:19.784 "product_name": "Malloc disk", 00:08:19.784 "block_size": 512, 00:08:19.784 "num_blocks": 65536, 00:08:19.784 "uuid": "cd3d1316-392b-49fa-9f57-0353d65420df", 00:08:19.784 "assigned_rate_limits": { 00:08:19.785 "rw_ios_per_sec": 0, 00:08:19.785 "rw_mbytes_per_sec": 0, 00:08:19.785 "r_mbytes_per_sec": 0, 00:08:19.785 "w_mbytes_per_sec": 0 00:08:19.785 }, 00:08:19.785 "claimed": true, 00:08:19.785 "claim_type": "exclusive_write", 00:08:19.785 "zoned": false, 00:08:19.785 "supported_io_types": { 00:08:19.785 "read": true, 00:08:19.785 "write": true, 00:08:19.785 "unmap": true, 00:08:19.785 "flush": true, 00:08:19.785 "reset": true, 00:08:19.785 "nvme_admin": false, 00:08:19.785 "nvme_io": false, 00:08:19.785 "nvme_io_md": false, 00:08:19.785 "write_zeroes": true, 00:08:19.785 "zcopy": true, 00:08:19.785 "get_zone_info": false, 00:08:19.785 "zone_management": false, 00:08:19.785 "zone_append": false, 00:08:19.785 "compare": false, 00:08:19.785 "compare_and_write": false, 00:08:19.785 "abort": true, 00:08:19.785 "seek_hole": false, 00:08:19.785 "seek_data": false, 00:08:19.785 "copy": true, 00:08:19.785 "nvme_iov_md": false 00:08:19.785 }, 00:08:19.785 "memory_domains": [ 00:08:19.785 { 00:08:19.785 "dma_device_id": "system", 00:08:19.785 "dma_device_type": 1 00:08:19.785 }, 00:08:19.785 { 00:08:19.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.785 "dma_device_type": 2 00:08:19.785 } 00:08:19.785 ], 00:08:19.785 "driver_specific": {} 00:08:19.785 } 00:08:19.785 ] 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.785 "name": "Existed_Raid", 00:08:19.785 "uuid": "a4b743f4-2f4f-4590-855b-24407b2657e6", 00:08:19.785 "strip_size_kb": 64, 00:08:19.785 "state": "configuring", 00:08:19.785 "raid_level": "raid0", 00:08:19.785 "superblock": true, 00:08:19.785 "num_base_bdevs": 3, 00:08:19.785 "num_base_bdevs_discovered": 2, 00:08:19.785 "num_base_bdevs_operational": 3, 00:08:19.785 "base_bdevs_list": [ 00:08:19.785 { 00:08:19.785 "name": "BaseBdev1", 00:08:19.785 "uuid": "557a72cb-0d9e-4ef4-9901-2d090162b73f", 00:08:19.785 "is_configured": true, 00:08:19.785 "data_offset": 2048, 00:08:19.785 "data_size": 63488 00:08:19.785 }, 00:08:19.785 { 00:08:19.785 "name": "BaseBdev2", 00:08:19.785 "uuid": "cd3d1316-392b-49fa-9f57-0353d65420df", 00:08:19.785 "is_configured": true, 00:08:19.785 "data_offset": 2048, 00:08:19.785 "data_size": 63488 00:08:19.785 }, 00:08:19.785 { 00:08:19.785 "name": "BaseBdev3", 00:08:19.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.785 "is_configured": false, 00:08:19.785 "data_offset": 0, 00:08:19.785 "data_size": 0 00:08:19.785 } 00:08:19.785 ] 00:08:19.785 }' 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.785 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.353 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:20.353 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.353 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.353 [2024-11-20 13:22:01.843419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:20.353 [2024-11-20 13:22:01.843624] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:20.353 [2024-11-20 13:22:01.843644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:20.353 [2024-11-20 13:22:01.843952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:20.353 BaseBdev3 00:08:20.354 [2024-11-20 13:22:01.844107] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:20.354 [2024-11-20 13:22:01.844123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:20.354 [2024-11-20 13:22:01.844243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.354 [ 00:08:20.354 { 00:08:20.354 "name": "BaseBdev3", 00:08:20.354 "aliases": [ 00:08:20.354 "a71403cd-0e7f-46b9-8dba-abf08543b9c1" 00:08:20.354 ], 00:08:20.354 "product_name": "Malloc disk", 00:08:20.354 "block_size": 512, 00:08:20.354 "num_blocks": 65536, 00:08:20.354 "uuid": "a71403cd-0e7f-46b9-8dba-abf08543b9c1", 00:08:20.354 "assigned_rate_limits": { 00:08:20.354 "rw_ios_per_sec": 0, 00:08:20.354 "rw_mbytes_per_sec": 0, 00:08:20.354 "r_mbytes_per_sec": 0, 00:08:20.354 "w_mbytes_per_sec": 0 00:08:20.354 }, 00:08:20.354 "claimed": true, 00:08:20.354 "claim_type": "exclusive_write", 00:08:20.354 "zoned": false, 00:08:20.354 "supported_io_types": { 00:08:20.354 "read": true, 00:08:20.354 "write": true, 00:08:20.354 "unmap": true, 00:08:20.354 "flush": true, 00:08:20.354 "reset": true, 00:08:20.354 "nvme_admin": false, 00:08:20.354 "nvme_io": false, 00:08:20.354 "nvme_io_md": false, 00:08:20.354 "write_zeroes": true, 00:08:20.354 "zcopy": true, 00:08:20.354 "get_zone_info": false, 00:08:20.354 "zone_management": false, 00:08:20.354 "zone_append": false, 00:08:20.354 "compare": false, 00:08:20.354 "compare_and_write": false, 00:08:20.354 "abort": true, 00:08:20.354 "seek_hole": false, 00:08:20.354 "seek_data": false, 00:08:20.354 "copy": true, 00:08:20.354 "nvme_iov_md": false 00:08:20.354 }, 00:08:20.354 "memory_domains": [ 00:08:20.354 { 00:08:20.354 "dma_device_id": "system", 00:08:20.354 "dma_device_type": 1 00:08:20.354 }, 00:08:20.354 { 00:08:20.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.354 "dma_device_type": 2 00:08:20.354 } 00:08:20.354 ], 00:08:20.354 "driver_specific": 
{} 00:08:20.354 } 00:08:20.354 ] 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.354 "name": "Existed_Raid", 00:08:20.354 "uuid": "a4b743f4-2f4f-4590-855b-24407b2657e6", 00:08:20.354 "strip_size_kb": 64, 00:08:20.354 "state": "online", 00:08:20.354 "raid_level": "raid0", 00:08:20.354 "superblock": true, 00:08:20.354 "num_base_bdevs": 3, 00:08:20.354 "num_base_bdevs_discovered": 3, 00:08:20.354 "num_base_bdevs_operational": 3, 00:08:20.354 "base_bdevs_list": [ 00:08:20.354 { 00:08:20.354 "name": "BaseBdev1", 00:08:20.354 "uuid": "557a72cb-0d9e-4ef4-9901-2d090162b73f", 00:08:20.354 "is_configured": true, 00:08:20.354 "data_offset": 2048, 00:08:20.354 "data_size": 63488 00:08:20.354 }, 00:08:20.354 { 00:08:20.354 "name": "BaseBdev2", 00:08:20.354 "uuid": "cd3d1316-392b-49fa-9f57-0353d65420df", 00:08:20.354 "is_configured": true, 00:08:20.354 "data_offset": 2048, 00:08:20.354 "data_size": 63488 00:08:20.354 }, 00:08:20.354 { 00:08:20.354 "name": "BaseBdev3", 00:08:20.354 "uuid": "a71403cd-0e7f-46b9-8dba-abf08543b9c1", 00:08:20.354 "is_configured": true, 00:08:20.354 "data_offset": 2048, 00:08:20.354 "data_size": 63488 00:08:20.354 } 00:08:20.354 ] 00:08:20.354 }' 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.354 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.923 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:20.923 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:20.923 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:20.923 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:20.923 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:20.923 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:20.923 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:20.923 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.923 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.923 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:20.923 [2024-11-20 13:22:02.362872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.923 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.923 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:20.923 "name": "Existed_Raid", 00:08:20.923 "aliases": [ 00:08:20.923 "a4b743f4-2f4f-4590-855b-24407b2657e6" 00:08:20.923 ], 00:08:20.923 "product_name": "Raid Volume", 00:08:20.923 "block_size": 512, 00:08:20.923 "num_blocks": 190464, 00:08:20.923 "uuid": "a4b743f4-2f4f-4590-855b-24407b2657e6", 00:08:20.923 "assigned_rate_limits": { 00:08:20.923 "rw_ios_per_sec": 0, 00:08:20.923 "rw_mbytes_per_sec": 0, 00:08:20.923 "r_mbytes_per_sec": 0, 00:08:20.923 "w_mbytes_per_sec": 0 00:08:20.923 }, 00:08:20.923 "claimed": false, 00:08:20.923 "zoned": false, 00:08:20.923 "supported_io_types": { 00:08:20.923 "read": true, 00:08:20.923 "write": true, 00:08:20.923 "unmap": true, 00:08:20.923 "flush": true, 00:08:20.923 "reset": true, 00:08:20.923 "nvme_admin": false, 00:08:20.923 "nvme_io": false, 00:08:20.923 "nvme_io_md": false, 00:08:20.923 
"write_zeroes": true, 00:08:20.923 "zcopy": false, 00:08:20.923 "get_zone_info": false, 00:08:20.923 "zone_management": false, 00:08:20.923 "zone_append": false, 00:08:20.923 "compare": false, 00:08:20.923 "compare_and_write": false, 00:08:20.923 "abort": false, 00:08:20.923 "seek_hole": false, 00:08:20.923 "seek_data": false, 00:08:20.923 "copy": false, 00:08:20.923 "nvme_iov_md": false 00:08:20.923 }, 00:08:20.923 "memory_domains": [ 00:08:20.923 { 00:08:20.923 "dma_device_id": "system", 00:08:20.923 "dma_device_type": 1 00:08:20.923 }, 00:08:20.923 { 00:08:20.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.923 "dma_device_type": 2 00:08:20.923 }, 00:08:20.923 { 00:08:20.923 "dma_device_id": "system", 00:08:20.923 "dma_device_type": 1 00:08:20.923 }, 00:08:20.923 { 00:08:20.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.923 "dma_device_type": 2 00:08:20.923 }, 00:08:20.923 { 00:08:20.923 "dma_device_id": "system", 00:08:20.923 "dma_device_type": 1 00:08:20.923 }, 00:08:20.923 { 00:08:20.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.923 "dma_device_type": 2 00:08:20.923 } 00:08:20.923 ], 00:08:20.923 "driver_specific": { 00:08:20.923 "raid": { 00:08:20.923 "uuid": "a4b743f4-2f4f-4590-855b-24407b2657e6", 00:08:20.923 "strip_size_kb": 64, 00:08:20.923 "state": "online", 00:08:20.923 "raid_level": "raid0", 00:08:20.924 "superblock": true, 00:08:20.924 "num_base_bdevs": 3, 00:08:20.924 "num_base_bdevs_discovered": 3, 00:08:20.924 "num_base_bdevs_operational": 3, 00:08:20.924 "base_bdevs_list": [ 00:08:20.924 { 00:08:20.924 "name": "BaseBdev1", 00:08:20.924 "uuid": "557a72cb-0d9e-4ef4-9901-2d090162b73f", 00:08:20.924 "is_configured": true, 00:08:20.924 "data_offset": 2048, 00:08:20.924 "data_size": 63488 00:08:20.924 }, 00:08:20.924 { 00:08:20.924 "name": "BaseBdev2", 00:08:20.924 "uuid": "cd3d1316-392b-49fa-9f57-0353d65420df", 00:08:20.924 "is_configured": true, 00:08:20.924 "data_offset": 2048, 00:08:20.924 "data_size": 63488 00:08:20.924 }, 
00:08:20.924 { 00:08:20.924 "name": "BaseBdev3", 00:08:20.924 "uuid": "a71403cd-0e7f-46b9-8dba-abf08543b9c1", 00:08:20.924 "is_configured": true, 00:08:20.924 "data_offset": 2048, 00:08:20.924 "data_size": 63488 00:08:20.924 } 00:08:20.924 ] 00:08:20.924 } 00:08:20.924 } 00:08:20.924 }' 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:20.924 BaseBdev2 00:08:20.924 BaseBdev3' 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.924 
13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.924 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.184 [2024-11-20 13:22:02.666095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:21.184 [2024-11-20 13:22:02.666122] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.184 [2024-11-20 13:22:02.666181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.184 "name": "Existed_Raid", 00:08:21.184 "uuid": "a4b743f4-2f4f-4590-855b-24407b2657e6", 00:08:21.184 "strip_size_kb": 64, 00:08:21.184 "state": "offline", 00:08:21.184 "raid_level": "raid0", 00:08:21.184 "superblock": true, 00:08:21.184 "num_base_bdevs": 3, 00:08:21.184 "num_base_bdevs_discovered": 2, 00:08:21.184 "num_base_bdevs_operational": 2, 00:08:21.184 "base_bdevs_list": [ 00:08:21.184 { 00:08:21.184 "name": null, 00:08:21.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.184 "is_configured": false, 00:08:21.184 "data_offset": 0, 00:08:21.184 "data_size": 63488 00:08:21.184 }, 00:08:21.184 { 00:08:21.184 "name": "BaseBdev2", 00:08:21.184 "uuid": "cd3d1316-392b-49fa-9f57-0353d65420df", 00:08:21.184 "is_configured": true, 00:08:21.184 "data_offset": 2048, 00:08:21.184 "data_size": 63488 00:08:21.184 }, 00:08:21.184 { 00:08:21.184 "name": "BaseBdev3", 00:08:21.184 "uuid": "a71403cd-0e7f-46b9-8dba-abf08543b9c1", 
00:08:21.184 "is_configured": true, 00:08:21.184 "data_offset": 2048, 00:08:21.184 "data_size": 63488 00:08:21.184 } 00:08:21.184 ] 00:08:21.184 }' 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.184 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.752 [2024-11-20 13:22:03.172494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.752 [2024-11-20 13:22:03.243669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:21.752 [2024-11-20 13:22:03.243716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.752 BaseBdev2 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:21.752 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.753 [ 00:08:21.753 { 00:08:21.753 "name": "BaseBdev2", 00:08:21.753 "aliases": [ 00:08:21.753 "b6e98277-e7db-43cb-b2c2-8f168f8b3955" 00:08:21.753 ], 00:08:21.753 "product_name": "Malloc disk", 00:08:21.753 "block_size": 512, 00:08:21.753 "num_blocks": 65536, 00:08:21.753 "uuid": "b6e98277-e7db-43cb-b2c2-8f168f8b3955", 00:08:21.753 "assigned_rate_limits": { 00:08:21.753 "rw_ios_per_sec": 0, 00:08:21.753 "rw_mbytes_per_sec": 0, 00:08:21.753 "r_mbytes_per_sec": 0, 00:08:21.753 "w_mbytes_per_sec": 0 00:08:21.753 }, 00:08:21.753 "claimed": false, 00:08:21.753 "zoned": false, 00:08:21.753 "supported_io_types": { 00:08:21.753 "read": true, 00:08:21.753 "write": true, 00:08:21.753 "unmap": true, 00:08:21.753 "flush": true, 00:08:21.753 "reset": true, 00:08:21.753 "nvme_admin": false, 00:08:21.753 "nvme_io": false, 00:08:21.753 "nvme_io_md": false, 00:08:21.753 "write_zeroes": true, 00:08:21.753 "zcopy": true, 00:08:21.753 "get_zone_info": false, 00:08:21.753 "zone_management": false, 00:08:21.753 
"zone_append": false, 00:08:21.753 "compare": false, 00:08:21.753 "compare_and_write": false, 00:08:21.753 "abort": true, 00:08:21.753 "seek_hole": false, 00:08:21.753 "seek_data": false, 00:08:21.753 "copy": true, 00:08:21.753 "nvme_iov_md": false 00:08:21.753 }, 00:08:21.753 "memory_domains": [ 00:08:21.753 { 00:08:21.753 "dma_device_id": "system", 00:08:21.753 "dma_device_type": 1 00:08:21.753 }, 00:08:21.753 { 00:08:21.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.753 "dma_device_type": 2 00:08:21.753 } 00:08:21.753 ], 00:08:21.753 "driver_specific": {} 00:08:21.753 } 00:08:21.753 ] 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.753 BaseBdev3 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:21.753 
13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.753 [ 00:08:21.753 { 00:08:21.753 "name": "BaseBdev3", 00:08:21.753 "aliases": [ 00:08:21.753 "757dede4-5177-4768-970e-e45151a4d528" 00:08:21.753 ], 00:08:21.753 "product_name": "Malloc disk", 00:08:21.753 "block_size": 512, 00:08:21.753 "num_blocks": 65536, 00:08:21.753 "uuid": "757dede4-5177-4768-970e-e45151a4d528", 00:08:21.753 "assigned_rate_limits": { 00:08:21.753 "rw_ios_per_sec": 0, 00:08:21.753 "rw_mbytes_per_sec": 0, 00:08:21.753 "r_mbytes_per_sec": 0, 00:08:21.753 "w_mbytes_per_sec": 0 00:08:21.753 }, 00:08:21.753 "claimed": false, 00:08:21.753 "zoned": false, 00:08:21.753 "supported_io_types": { 00:08:21.753 "read": true, 00:08:21.753 "write": true, 00:08:21.753 "unmap": true, 00:08:21.753 "flush": true, 00:08:21.753 "reset": true, 00:08:21.753 "nvme_admin": false, 00:08:21.753 "nvme_io": false, 00:08:21.753 "nvme_io_md": false, 00:08:21.753 "write_zeroes": true, 00:08:21.753 "zcopy": true, 00:08:21.753 "get_zone_info": false, 
00:08:21.753 "zone_management": false, 00:08:21.753 "zone_append": false, 00:08:21.753 "compare": false, 00:08:21.753 "compare_and_write": false, 00:08:21.753 "abort": true, 00:08:21.753 "seek_hole": false, 00:08:21.753 "seek_data": false, 00:08:21.753 "copy": true, 00:08:21.753 "nvme_iov_md": false 00:08:21.753 }, 00:08:21.753 "memory_domains": [ 00:08:21.753 { 00:08:21.753 "dma_device_id": "system", 00:08:21.753 "dma_device_type": 1 00:08:21.753 }, 00:08:21.753 { 00:08:21.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.753 "dma_device_type": 2 00:08:21.753 } 00:08:21.753 ], 00:08:21.753 "driver_specific": {} 00:08:21.753 } 00:08:21.753 ] 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:21.753 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.013 [2024-11-20 13:22:03.424232] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.013 [2024-11-20 13:22:03.424320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.013 [2024-11-20 13:22:03.424362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.013 [2024-11-20 13:22:03.426182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:22.013 "name": "Existed_Raid", 00:08:22.013 "uuid": "e6df9399-4626-48d4-b484-1544f111595c", 00:08:22.013 "strip_size_kb": 64, 00:08:22.013 "state": "configuring", 00:08:22.013 "raid_level": "raid0", 00:08:22.013 "superblock": true, 00:08:22.013 "num_base_bdevs": 3, 00:08:22.013 "num_base_bdevs_discovered": 2, 00:08:22.013 "num_base_bdevs_operational": 3, 00:08:22.013 "base_bdevs_list": [ 00:08:22.013 { 00:08:22.013 "name": "BaseBdev1", 00:08:22.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.013 "is_configured": false, 00:08:22.013 "data_offset": 0, 00:08:22.013 "data_size": 0 00:08:22.013 }, 00:08:22.013 { 00:08:22.013 "name": "BaseBdev2", 00:08:22.013 "uuid": "b6e98277-e7db-43cb-b2c2-8f168f8b3955", 00:08:22.013 "is_configured": true, 00:08:22.013 "data_offset": 2048, 00:08:22.013 "data_size": 63488 00:08:22.013 }, 00:08:22.013 { 00:08:22.013 "name": "BaseBdev3", 00:08:22.013 "uuid": "757dede4-5177-4768-970e-e45151a4d528", 00:08:22.013 "is_configured": true, 00:08:22.013 "data_offset": 2048, 00:08:22.013 "data_size": 63488 00:08:22.013 } 00:08:22.013 ] 00:08:22.013 }' 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.013 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.273 [2024-11-20 13:22:03.919679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.273 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.532 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.532 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.532 "name": "Existed_Raid", 00:08:22.532 "uuid": "e6df9399-4626-48d4-b484-1544f111595c", 00:08:22.532 "strip_size_kb": 64, 00:08:22.532 "state": "configuring", 00:08:22.532 "raid_level": "raid0", 
00:08:22.532 "superblock": true, 00:08:22.532 "num_base_bdevs": 3, 00:08:22.532 "num_base_bdevs_discovered": 1, 00:08:22.532 "num_base_bdevs_operational": 3, 00:08:22.532 "base_bdevs_list": [ 00:08:22.532 { 00:08:22.532 "name": "BaseBdev1", 00:08:22.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.532 "is_configured": false, 00:08:22.532 "data_offset": 0, 00:08:22.532 "data_size": 0 00:08:22.532 }, 00:08:22.532 { 00:08:22.532 "name": null, 00:08:22.532 "uuid": "b6e98277-e7db-43cb-b2c2-8f168f8b3955", 00:08:22.532 "is_configured": false, 00:08:22.532 "data_offset": 0, 00:08:22.532 "data_size": 63488 00:08:22.532 }, 00:08:22.532 { 00:08:22.532 "name": "BaseBdev3", 00:08:22.532 "uuid": "757dede4-5177-4768-970e-e45151a4d528", 00:08:22.532 "is_configured": true, 00:08:22.532 "data_offset": 2048, 00:08:22.532 "data_size": 63488 00:08:22.532 } 00:08:22.532 ] 00:08:22.532 }' 00:08:22.532 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.532 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.792 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.792 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.792 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.792 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:22.792 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.792 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:22.792 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:22.792 13:22:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.792 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.792 [2024-11-20 13:22:04.426086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.792 BaseBdev1 00:08:22.792 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.792 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:22.793 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:22.793 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.793 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:22.793 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.793 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.793 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.793 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.793 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.793 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.793 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:22.793 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.793 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.793 [ 00:08:22.793 { 00:08:22.793 "name": "BaseBdev1", 00:08:22.793 
"aliases": [ 00:08:22.793 "4ba49c3d-535b-4a1d-83fe-283655103d10" 00:08:22.793 ], 00:08:22.793 "product_name": "Malloc disk", 00:08:22.793 "block_size": 512, 00:08:22.793 "num_blocks": 65536, 00:08:22.793 "uuid": "4ba49c3d-535b-4a1d-83fe-283655103d10", 00:08:22.793 "assigned_rate_limits": { 00:08:22.793 "rw_ios_per_sec": 0, 00:08:22.793 "rw_mbytes_per_sec": 0, 00:08:22.793 "r_mbytes_per_sec": 0, 00:08:22.793 "w_mbytes_per_sec": 0 00:08:22.793 }, 00:08:22.793 "claimed": true, 00:08:22.793 "claim_type": "exclusive_write", 00:08:22.793 "zoned": false, 00:08:22.793 "supported_io_types": { 00:08:22.793 "read": true, 00:08:22.793 "write": true, 00:08:22.793 "unmap": true, 00:08:22.793 "flush": true, 00:08:22.793 "reset": true, 00:08:22.793 "nvme_admin": false, 00:08:22.793 "nvme_io": false, 00:08:22.793 "nvme_io_md": false, 00:08:22.793 "write_zeroes": true, 00:08:23.053 "zcopy": true, 00:08:23.053 "get_zone_info": false, 00:08:23.053 "zone_management": false, 00:08:23.053 "zone_append": false, 00:08:23.053 "compare": false, 00:08:23.053 "compare_and_write": false, 00:08:23.053 "abort": true, 00:08:23.053 "seek_hole": false, 00:08:23.053 "seek_data": false, 00:08:23.053 "copy": true, 00:08:23.053 "nvme_iov_md": false 00:08:23.053 }, 00:08:23.053 "memory_domains": [ 00:08:23.053 { 00:08:23.053 "dma_device_id": "system", 00:08:23.053 "dma_device_type": 1 00:08:23.053 }, 00:08:23.053 { 00:08:23.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.053 "dma_device_type": 2 00:08:23.053 } 00:08:23.053 ], 00:08:23.053 "driver_specific": {} 00:08:23.053 } 00:08:23.053 ] 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.053 13:22:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.053 "name": "Existed_Raid", 00:08:23.053 "uuid": "e6df9399-4626-48d4-b484-1544f111595c", 00:08:23.053 "strip_size_kb": 64, 00:08:23.053 "state": "configuring", 00:08:23.053 "raid_level": "raid0", 00:08:23.053 "superblock": true, 00:08:23.053 "num_base_bdevs": 3, 00:08:23.053 
"num_base_bdevs_discovered": 2, 00:08:23.053 "num_base_bdevs_operational": 3, 00:08:23.053 "base_bdevs_list": [ 00:08:23.053 { 00:08:23.053 "name": "BaseBdev1", 00:08:23.053 "uuid": "4ba49c3d-535b-4a1d-83fe-283655103d10", 00:08:23.053 "is_configured": true, 00:08:23.053 "data_offset": 2048, 00:08:23.053 "data_size": 63488 00:08:23.053 }, 00:08:23.053 { 00:08:23.053 "name": null, 00:08:23.053 "uuid": "b6e98277-e7db-43cb-b2c2-8f168f8b3955", 00:08:23.053 "is_configured": false, 00:08:23.053 "data_offset": 0, 00:08:23.053 "data_size": 63488 00:08:23.053 }, 00:08:23.053 { 00:08:23.053 "name": "BaseBdev3", 00:08:23.053 "uuid": "757dede4-5177-4768-970e-e45151a4d528", 00:08:23.053 "is_configured": true, 00:08:23.053 "data_offset": 2048, 00:08:23.053 "data_size": 63488 00:08:23.053 } 00:08:23.053 ] 00:08:23.053 }' 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.053 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.313 13:22:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.313 [2024-11-20 13:22:04.941286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.313 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.313 13:22:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.574 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.574 "name": "Existed_Raid", 00:08:23.574 "uuid": "e6df9399-4626-48d4-b484-1544f111595c", 00:08:23.574 "strip_size_kb": 64, 00:08:23.574 "state": "configuring", 00:08:23.574 "raid_level": "raid0", 00:08:23.574 "superblock": true, 00:08:23.574 "num_base_bdevs": 3, 00:08:23.574 "num_base_bdevs_discovered": 1, 00:08:23.574 "num_base_bdevs_operational": 3, 00:08:23.574 "base_bdevs_list": [ 00:08:23.574 { 00:08:23.574 "name": "BaseBdev1", 00:08:23.574 "uuid": "4ba49c3d-535b-4a1d-83fe-283655103d10", 00:08:23.574 "is_configured": true, 00:08:23.574 "data_offset": 2048, 00:08:23.574 "data_size": 63488 00:08:23.574 }, 00:08:23.574 { 00:08:23.574 "name": null, 00:08:23.574 "uuid": "b6e98277-e7db-43cb-b2c2-8f168f8b3955", 00:08:23.574 "is_configured": false, 00:08:23.574 "data_offset": 0, 00:08:23.574 "data_size": 63488 00:08:23.574 }, 00:08:23.574 { 00:08:23.574 "name": null, 00:08:23.574 "uuid": "757dede4-5177-4768-970e-e45151a4d528", 00:08:23.574 "is_configured": false, 00:08:23.574 "data_offset": 0, 00:08:23.574 "data_size": 63488 00:08:23.574 } 00:08:23.574 ] 00:08:23.574 }' 00:08:23.574 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.574 13:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.834 13:22:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.834 [2024-11-20 13:22:05.408450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.834 "name": "Existed_Raid", 00:08:23.834 "uuid": "e6df9399-4626-48d4-b484-1544f111595c", 00:08:23.834 "strip_size_kb": 64, 00:08:23.834 "state": "configuring", 00:08:23.834 "raid_level": "raid0", 00:08:23.834 "superblock": true, 00:08:23.834 "num_base_bdevs": 3, 00:08:23.834 "num_base_bdevs_discovered": 2, 00:08:23.834 "num_base_bdevs_operational": 3, 00:08:23.834 "base_bdevs_list": [ 00:08:23.834 { 00:08:23.834 "name": "BaseBdev1", 00:08:23.834 "uuid": "4ba49c3d-535b-4a1d-83fe-283655103d10", 00:08:23.834 "is_configured": true, 00:08:23.834 "data_offset": 2048, 00:08:23.834 "data_size": 63488 00:08:23.834 }, 00:08:23.834 { 00:08:23.834 "name": null, 00:08:23.834 "uuid": "b6e98277-e7db-43cb-b2c2-8f168f8b3955", 00:08:23.834 "is_configured": false, 00:08:23.834 "data_offset": 0, 00:08:23.834 "data_size": 63488 00:08:23.834 }, 00:08:23.834 { 00:08:23.834 "name": "BaseBdev3", 00:08:23.834 "uuid": "757dede4-5177-4768-970e-e45151a4d528", 00:08:23.834 "is_configured": true, 00:08:23.834 "data_offset": 2048, 00:08:23.834 "data_size": 63488 00:08:23.834 } 00:08:23.834 ] 00:08:23.834 }' 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.834 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:24.403 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:24.403 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.403 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.403 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.403 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.404 [2024-11-20 13:22:05.911751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.404 "name": "Existed_Raid", 00:08:24.404 "uuid": "e6df9399-4626-48d4-b484-1544f111595c", 00:08:24.404 "strip_size_kb": 64, 00:08:24.404 "state": "configuring", 00:08:24.404 "raid_level": "raid0", 00:08:24.404 "superblock": true, 00:08:24.404 "num_base_bdevs": 3, 00:08:24.404 "num_base_bdevs_discovered": 1, 00:08:24.404 "num_base_bdevs_operational": 3, 00:08:24.404 "base_bdevs_list": [ 00:08:24.404 { 00:08:24.404 "name": null, 00:08:24.404 "uuid": "4ba49c3d-535b-4a1d-83fe-283655103d10", 00:08:24.404 "is_configured": false, 00:08:24.404 "data_offset": 0, 00:08:24.404 "data_size": 63488 00:08:24.404 }, 00:08:24.404 { 00:08:24.404 "name": null, 00:08:24.404 "uuid": "b6e98277-e7db-43cb-b2c2-8f168f8b3955", 00:08:24.404 "is_configured": false, 00:08:24.404 "data_offset": 0, 00:08:24.404 "data_size": 63488 00:08:24.404 
}, 00:08:24.404 { 00:08:24.404 "name": "BaseBdev3", 00:08:24.404 "uuid": "757dede4-5177-4768-970e-e45151a4d528", 00:08:24.404 "is_configured": true, 00:08:24.404 "data_offset": 2048, 00:08:24.404 "data_size": 63488 00:08:24.404 } 00:08:24.404 ] 00:08:24.404 }' 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.404 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.973 [2024-11-20 13:22:06.441707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.973 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.973 "name": "Existed_Raid", 00:08:24.973 "uuid": "e6df9399-4626-48d4-b484-1544f111595c", 00:08:24.973 "strip_size_kb": 64, 00:08:24.973 "state": "configuring", 00:08:24.973 "raid_level": "raid0", 00:08:24.973 "superblock": true, 00:08:24.973 "num_base_bdevs": 3, 00:08:24.973 "num_base_bdevs_discovered": 2, 00:08:24.973 
"num_base_bdevs_operational": 3, 00:08:24.973 "base_bdevs_list": [ 00:08:24.973 { 00:08:24.973 "name": null, 00:08:24.973 "uuid": "4ba49c3d-535b-4a1d-83fe-283655103d10", 00:08:24.973 "is_configured": false, 00:08:24.973 "data_offset": 0, 00:08:24.973 "data_size": 63488 00:08:24.973 }, 00:08:24.973 { 00:08:24.973 "name": "BaseBdev2", 00:08:24.973 "uuid": "b6e98277-e7db-43cb-b2c2-8f168f8b3955", 00:08:24.973 "is_configured": true, 00:08:24.973 "data_offset": 2048, 00:08:24.973 "data_size": 63488 00:08:24.973 }, 00:08:24.973 { 00:08:24.974 "name": "BaseBdev3", 00:08:24.974 "uuid": "757dede4-5177-4768-970e-e45151a4d528", 00:08:24.974 "is_configured": true, 00:08:24.974 "data_offset": 2048, 00:08:24.974 "data_size": 63488 00:08:24.974 } 00:08:24.974 ] 00:08:24.974 }' 00:08:24.974 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.974 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.543 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.543 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:25.543 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.543 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.543 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.543 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:25.543 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.543 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:25.543 13:22:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.543 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.543 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.543 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4ba49c3d-535b-4a1d-83fe-283655103d10 00:08:25.543 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.543 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.543 [2024-11-20 13:22:07.019776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:25.543 NewBaseBdev 00:08:25.543 [2024-11-20 13:22:07.020047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:25.543 [2024-11-20 13:22:07.020069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:25.543 [2024-11-20 13:22:07.020310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:25.543 [2024-11-20 13:22:07.020422] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:25.543 [2024-11-20 13:22:07.020432] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:25.543 [2024-11-20 13:22:07.020538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.543 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.543 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:25.543 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:25.543 13:22:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:25.543 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:25.543 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:25.543 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:25.543 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:25.543 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.543 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.543 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.543 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:25.543 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.543 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.543 [ 00:08:25.543 { 00:08:25.543 "name": "NewBaseBdev", 00:08:25.543 "aliases": [ 00:08:25.543 "4ba49c3d-535b-4a1d-83fe-283655103d10" 00:08:25.543 ], 00:08:25.543 "product_name": "Malloc disk", 00:08:25.543 "block_size": 512, 00:08:25.544 "num_blocks": 65536, 00:08:25.544 "uuid": "4ba49c3d-535b-4a1d-83fe-283655103d10", 00:08:25.544 "assigned_rate_limits": { 00:08:25.544 "rw_ios_per_sec": 0, 00:08:25.544 "rw_mbytes_per_sec": 0, 00:08:25.544 "r_mbytes_per_sec": 0, 00:08:25.544 "w_mbytes_per_sec": 0 00:08:25.544 }, 00:08:25.544 "claimed": true, 00:08:25.544 "claim_type": "exclusive_write", 00:08:25.544 "zoned": false, 00:08:25.544 "supported_io_types": { 00:08:25.544 "read": true, 00:08:25.544 "write": true, 00:08:25.544 "unmap": true, 
00:08:25.544 "flush": true, 00:08:25.544 "reset": true, 00:08:25.544 "nvme_admin": false, 00:08:25.544 "nvme_io": false, 00:08:25.544 "nvme_io_md": false, 00:08:25.544 "write_zeroes": true, 00:08:25.544 "zcopy": true, 00:08:25.544 "get_zone_info": false, 00:08:25.544 "zone_management": false, 00:08:25.544 "zone_append": false, 00:08:25.544 "compare": false, 00:08:25.544 "compare_and_write": false, 00:08:25.544 "abort": true, 00:08:25.544 "seek_hole": false, 00:08:25.544 "seek_data": false, 00:08:25.544 "copy": true, 00:08:25.544 "nvme_iov_md": false 00:08:25.544 }, 00:08:25.544 "memory_domains": [ 00:08:25.544 { 00:08:25.544 "dma_device_id": "system", 00:08:25.544 "dma_device_type": 1 00:08:25.544 }, 00:08:25.544 { 00:08:25.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.544 "dma_device_type": 2 00:08:25.544 } 00:08:25.544 ], 00:08:25.544 "driver_specific": {} 00:08:25.544 } 00:08:25.544 ] 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.544 13:22:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.544 "name": "Existed_Raid", 00:08:25.544 "uuid": "e6df9399-4626-48d4-b484-1544f111595c", 00:08:25.544 "strip_size_kb": 64, 00:08:25.544 "state": "online", 00:08:25.544 "raid_level": "raid0", 00:08:25.544 "superblock": true, 00:08:25.544 "num_base_bdevs": 3, 00:08:25.544 "num_base_bdevs_discovered": 3, 00:08:25.544 "num_base_bdevs_operational": 3, 00:08:25.544 "base_bdevs_list": [ 00:08:25.544 { 00:08:25.544 "name": "NewBaseBdev", 00:08:25.544 "uuid": "4ba49c3d-535b-4a1d-83fe-283655103d10", 00:08:25.544 "is_configured": true, 00:08:25.544 "data_offset": 2048, 00:08:25.544 "data_size": 63488 00:08:25.544 }, 00:08:25.544 { 00:08:25.544 "name": "BaseBdev2", 00:08:25.544 "uuid": "b6e98277-e7db-43cb-b2c2-8f168f8b3955", 00:08:25.544 "is_configured": true, 00:08:25.544 "data_offset": 2048, 00:08:25.544 "data_size": 63488 00:08:25.544 }, 00:08:25.544 { 00:08:25.544 "name": "BaseBdev3", 00:08:25.544 "uuid": "757dede4-5177-4768-970e-e45151a4d528", 00:08:25.544 "is_configured": 
true, 00:08:25.544 "data_offset": 2048, 00:08:25.544 "data_size": 63488 00:08:25.544 } 00:08:25.544 ] 00:08:25.544 }' 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.544 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.112 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:26.112 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:26.112 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:26.112 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:26.112 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:26.112 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:26.112 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.113 [2024-11-20 13:22:07.503659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:26.113 "name": "Existed_Raid", 00:08:26.113 "aliases": [ 00:08:26.113 "e6df9399-4626-48d4-b484-1544f111595c" 00:08:26.113 ], 00:08:26.113 "product_name": "Raid Volume", 
00:08:26.113 "block_size": 512, 00:08:26.113 "num_blocks": 190464, 00:08:26.113 "uuid": "e6df9399-4626-48d4-b484-1544f111595c", 00:08:26.113 "assigned_rate_limits": { 00:08:26.113 "rw_ios_per_sec": 0, 00:08:26.113 "rw_mbytes_per_sec": 0, 00:08:26.113 "r_mbytes_per_sec": 0, 00:08:26.113 "w_mbytes_per_sec": 0 00:08:26.113 }, 00:08:26.113 "claimed": false, 00:08:26.113 "zoned": false, 00:08:26.113 "supported_io_types": { 00:08:26.113 "read": true, 00:08:26.113 "write": true, 00:08:26.113 "unmap": true, 00:08:26.113 "flush": true, 00:08:26.113 "reset": true, 00:08:26.113 "nvme_admin": false, 00:08:26.113 "nvme_io": false, 00:08:26.113 "nvme_io_md": false, 00:08:26.113 "write_zeroes": true, 00:08:26.113 "zcopy": false, 00:08:26.113 "get_zone_info": false, 00:08:26.113 "zone_management": false, 00:08:26.113 "zone_append": false, 00:08:26.113 "compare": false, 00:08:26.113 "compare_and_write": false, 00:08:26.113 "abort": false, 00:08:26.113 "seek_hole": false, 00:08:26.113 "seek_data": false, 00:08:26.113 "copy": false, 00:08:26.113 "nvme_iov_md": false 00:08:26.113 }, 00:08:26.113 "memory_domains": [ 00:08:26.113 { 00:08:26.113 "dma_device_id": "system", 00:08:26.113 "dma_device_type": 1 00:08:26.113 }, 00:08:26.113 { 00:08:26.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.113 "dma_device_type": 2 00:08:26.113 }, 00:08:26.113 { 00:08:26.113 "dma_device_id": "system", 00:08:26.113 "dma_device_type": 1 00:08:26.113 }, 00:08:26.113 { 00:08:26.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.113 "dma_device_type": 2 00:08:26.113 }, 00:08:26.113 { 00:08:26.113 "dma_device_id": "system", 00:08:26.113 "dma_device_type": 1 00:08:26.113 }, 00:08:26.113 { 00:08:26.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.113 "dma_device_type": 2 00:08:26.113 } 00:08:26.113 ], 00:08:26.113 "driver_specific": { 00:08:26.113 "raid": { 00:08:26.113 "uuid": "e6df9399-4626-48d4-b484-1544f111595c", 00:08:26.113 "strip_size_kb": 64, 00:08:26.113 "state": "online", 00:08:26.113 
"raid_level": "raid0", 00:08:26.113 "superblock": true, 00:08:26.113 "num_base_bdevs": 3, 00:08:26.113 "num_base_bdevs_discovered": 3, 00:08:26.113 "num_base_bdevs_operational": 3, 00:08:26.113 "base_bdevs_list": [ 00:08:26.113 { 00:08:26.113 "name": "NewBaseBdev", 00:08:26.113 "uuid": "4ba49c3d-535b-4a1d-83fe-283655103d10", 00:08:26.113 "is_configured": true, 00:08:26.113 "data_offset": 2048, 00:08:26.113 "data_size": 63488 00:08:26.113 }, 00:08:26.113 { 00:08:26.113 "name": "BaseBdev2", 00:08:26.113 "uuid": "b6e98277-e7db-43cb-b2c2-8f168f8b3955", 00:08:26.113 "is_configured": true, 00:08:26.113 "data_offset": 2048, 00:08:26.113 "data_size": 63488 00:08:26.113 }, 00:08:26.113 { 00:08:26.113 "name": "BaseBdev3", 00:08:26.113 "uuid": "757dede4-5177-4768-970e-e45151a4d528", 00:08:26.113 "is_configured": true, 00:08:26.113 "data_offset": 2048, 00:08:26.113 "data_size": 63488 00:08:26.113 } 00:08:26.113 ] 00:08:26.113 } 00:08:26.113 } 00:08:26.113 }' 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:26.113 BaseBdev2 00:08:26.113 BaseBdev3' 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.113 13:22:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.113 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.372 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.372 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.372 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:26.372 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.372 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.372 [2024-11-20 13:22:07.798794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:26.372 [2024-11-20 13:22:07.798821] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.372 [2024-11-20 13:22:07.798905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.372 [2024-11-20 13:22:07.798960] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.372 [2024-11-20 13:22:07.798971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:26.372 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.372 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75343 00:08:26.372 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75343 ']' 00:08:26.372 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 75343 00:08:26.372 13:22:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:26.372 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.372 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75343 00:08:26.372 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.372 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.372 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75343' 00:08:26.372 killing process with pid 75343 00:08:26.372 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 75343 00:08:26.372 [2024-11-20 13:22:07.849478] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:26.372 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 75343 00:08:26.372 [2024-11-20 13:22:07.880973] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.631 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:26.631 00:08:26.631 real 0m9.014s 00:08:26.631 user 0m15.435s 00:08:26.631 sys 0m1.805s 00:08:26.631 13:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.631 ************************************ 00:08:26.631 END TEST raid_state_function_test_sb 00:08:26.631 ************************************ 00:08:26.631 13:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.631 13:22:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:26.631 13:22:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:26.631 13:22:08 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.631 13:22:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.631 ************************************ 00:08:26.631 START TEST raid_superblock_test 00:08:26.631 ************************************ 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:26.631 13:22:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75948 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75948 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 75948 ']' 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.631 13:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.631 [2024-11-20 13:22:08.250030] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:08:26.631 [2024-11-20 13:22:08.250247] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75948 ] 00:08:26.890 [2024-11-20 13:22:08.403255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.890 [2024-11-20 13:22:08.430095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.890 [2024-11-20 13:22:08.473014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.890 [2024-11-20 13:22:08.473154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:27.458 
13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.458 malloc1 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.458 [2024-11-20 13:22:09.099687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:27.458 [2024-11-20 13:22:09.099789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.458 [2024-11-20 13:22:09.099825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:27.458 [2024-11-20 13:22:09.099870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.458 [2024-11-20 13:22:09.102055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.458 [2024-11-20 13:22:09.102129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:27.458 pt1 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.458 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.724 malloc2 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.724 [2024-11-20 13:22:09.132364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:27.724 [2024-11-20 13:22:09.132420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.724 [2024-11-20 13:22:09.132436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:27.724 [2024-11-20 13:22:09.132446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.724 [2024-11-20 13:22:09.134522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.724 [2024-11-20 13:22:09.134618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:27.724 
pt2 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.724 malloc3 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.724 [2024-11-20 13:22:09.161123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:27.724 [2024-11-20 13:22:09.161252] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.724 [2024-11-20 13:22:09.161289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:27.724 [2024-11-20 13:22:09.161320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.724 [2024-11-20 13:22:09.163415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.724 [2024-11-20 13:22:09.163492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:27.724 pt3 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.724 [2024-11-20 13:22:09.173158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:27.724 [2024-11-20 13:22:09.175066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:27.724 [2024-11-20 13:22:09.175156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:27.724 [2024-11-20 13:22:09.175316] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:27.724 [2024-11-20 13:22:09.175360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:27.724 [2024-11-20 13:22:09.175622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:08:27.724 [2024-11-20 13:22:09.175783] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:27.724 [2024-11-20 13:22:09.175823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:27.724 [2024-11-20 13:22:09.175976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.724 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.725 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.725 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.725 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.725 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.725 13:22:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.725 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.725 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.725 "name": "raid_bdev1", 00:08:27.725 "uuid": "d35e2657-a83f-426d-8ff6-687206f72249", 00:08:27.725 "strip_size_kb": 64, 00:08:27.725 "state": "online", 00:08:27.725 "raid_level": "raid0", 00:08:27.725 "superblock": true, 00:08:27.725 "num_base_bdevs": 3, 00:08:27.725 "num_base_bdevs_discovered": 3, 00:08:27.725 "num_base_bdevs_operational": 3, 00:08:27.725 "base_bdevs_list": [ 00:08:27.725 { 00:08:27.725 "name": "pt1", 00:08:27.725 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:27.725 "is_configured": true, 00:08:27.725 "data_offset": 2048, 00:08:27.725 "data_size": 63488 00:08:27.725 }, 00:08:27.725 { 00:08:27.725 "name": "pt2", 00:08:27.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.725 "is_configured": true, 00:08:27.725 "data_offset": 2048, 00:08:27.725 "data_size": 63488 00:08:27.725 }, 00:08:27.725 { 00:08:27.725 "name": "pt3", 00:08:27.725 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:27.725 "is_configured": true, 00:08:27.725 "data_offset": 2048, 00:08:27.725 "data_size": 63488 00:08:27.725 } 00:08:27.725 ] 00:08:27.725 }' 00:08:27.725 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.725 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.985 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:27.985 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:27.985 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.985 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:27.985 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.985 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.985 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:27.985 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.985 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.985 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.985 [2024-11-20 13:22:09.616709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.985 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.245 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:28.245 "name": "raid_bdev1", 00:08:28.245 "aliases": [ 00:08:28.245 "d35e2657-a83f-426d-8ff6-687206f72249" 00:08:28.245 ], 00:08:28.245 "product_name": "Raid Volume", 00:08:28.245 "block_size": 512, 00:08:28.245 "num_blocks": 190464, 00:08:28.245 "uuid": "d35e2657-a83f-426d-8ff6-687206f72249", 00:08:28.245 "assigned_rate_limits": { 00:08:28.245 "rw_ios_per_sec": 0, 00:08:28.245 "rw_mbytes_per_sec": 0, 00:08:28.245 "r_mbytes_per_sec": 0, 00:08:28.245 "w_mbytes_per_sec": 0 00:08:28.245 }, 00:08:28.245 "claimed": false, 00:08:28.245 "zoned": false, 00:08:28.245 "supported_io_types": { 00:08:28.245 "read": true, 00:08:28.245 "write": true, 00:08:28.245 "unmap": true, 00:08:28.245 "flush": true, 00:08:28.245 "reset": true, 00:08:28.245 "nvme_admin": false, 00:08:28.245 "nvme_io": false, 00:08:28.245 "nvme_io_md": false, 00:08:28.245 "write_zeroes": true, 00:08:28.245 "zcopy": false, 00:08:28.245 "get_zone_info": false, 00:08:28.245 "zone_management": false, 00:08:28.245 "zone_append": false, 00:08:28.245 "compare": 
false, 00:08:28.245 "compare_and_write": false, 00:08:28.245 "abort": false, 00:08:28.245 "seek_hole": false, 00:08:28.245 "seek_data": false, 00:08:28.245 "copy": false, 00:08:28.245 "nvme_iov_md": false 00:08:28.245 }, 00:08:28.245 "memory_domains": [ 00:08:28.245 { 00:08:28.245 "dma_device_id": "system", 00:08:28.245 "dma_device_type": 1 00:08:28.245 }, 00:08:28.245 { 00:08:28.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.245 "dma_device_type": 2 00:08:28.245 }, 00:08:28.245 { 00:08:28.245 "dma_device_id": "system", 00:08:28.245 "dma_device_type": 1 00:08:28.245 }, 00:08:28.245 { 00:08:28.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.245 "dma_device_type": 2 00:08:28.245 }, 00:08:28.245 { 00:08:28.245 "dma_device_id": "system", 00:08:28.245 "dma_device_type": 1 00:08:28.245 }, 00:08:28.245 { 00:08:28.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.245 "dma_device_type": 2 00:08:28.245 } 00:08:28.245 ], 00:08:28.245 "driver_specific": { 00:08:28.245 "raid": { 00:08:28.245 "uuid": "d35e2657-a83f-426d-8ff6-687206f72249", 00:08:28.245 "strip_size_kb": 64, 00:08:28.245 "state": "online", 00:08:28.245 "raid_level": "raid0", 00:08:28.245 "superblock": true, 00:08:28.245 "num_base_bdevs": 3, 00:08:28.245 "num_base_bdevs_discovered": 3, 00:08:28.245 "num_base_bdevs_operational": 3, 00:08:28.245 "base_bdevs_list": [ 00:08:28.245 { 00:08:28.245 "name": "pt1", 00:08:28.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:28.245 "is_configured": true, 00:08:28.245 "data_offset": 2048, 00:08:28.245 "data_size": 63488 00:08:28.245 }, 00:08:28.245 { 00:08:28.245 "name": "pt2", 00:08:28.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.245 "is_configured": true, 00:08:28.245 "data_offset": 2048, 00:08:28.245 "data_size": 63488 00:08:28.245 }, 00:08:28.245 { 00:08:28.245 "name": "pt3", 00:08:28.245 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:28.245 "is_configured": true, 00:08:28.245 "data_offset": 2048, 00:08:28.245 "data_size": 
63488 00:08:28.245 } 00:08:28.245 ] 00:08:28.245 } 00:08:28.245 } 00:08:28.245 }' 00:08:28.245 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:28.245 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:28.245 pt2 00:08:28.245 pt3' 00:08:28.245 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.245 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.245 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.245 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:28.245 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.245 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.245 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.245 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.245 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.245 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.246 
13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.246 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:28.506 [2024-11-20 13:22:09.912212] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d35e2657-a83f-426d-8ff6-687206f72249 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d35e2657-a83f-426d-8ff6-687206f72249 ']' 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.506 [2024-11-20 13:22:09.959790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.506 [2024-11-20 13:22:09.959824] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.506 [2024-11-20 13:22:09.959931] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.506 [2024-11-20 13:22:09.959994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.506 [2024-11-20 13:22:09.960018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.506 13:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:28.506 13:22:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:28.506 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.507 [2024-11-20 13:22:10.087674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:28.507 [2024-11-20 13:22:10.089623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:28.507 [2024-11-20 13:22:10.089672] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:28.507 [2024-11-20 13:22:10.089726] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:28.507 [2024-11-20 13:22:10.089781] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:28.507 [2024-11-20 13:22:10.089828] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:28.507 [2024-11-20 13:22:10.089841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.507 [2024-11-20 13:22:10.089852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:28.507 request: 00:08:28.507 { 00:08:28.507 "name": "raid_bdev1", 00:08:28.507 "raid_level": "raid0", 00:08:28.507 "base_bdevs": [ 00:08:28.507 "malloc1", 00:08:28.507 "malloc2", 00:08:28.507 "malloc3" 00:08:28.507 ], 00:08:28.507 "strip_size_kb": 64, 00:08:28.507 "superblock": false, 00:08:28.507 "method": "bdev_raid_create", 00:08:28.507 "req_id": 1 00:08:28.507 } 00:08:28.507 Got JSON-RPC error response 00:08:28.507 response: 00:08:28.507 { 00:08:28.507 "code": -17, 00:08:28.507 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:28.507 } 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.507 [2024-11-20 13:22:10.139565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:28.507 [2024-11-20 13:22:10.139634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.507 [2024-11-20 13:22:10.139654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:28.507 [2024-11-20 13:22:10.139665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.507 [2024-11-20 13:22:10.141886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.507 [2024-11-20 13:22:10.141928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:28.507 [2024-11-20 13:22:10.142035] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:28.507 [2024-11-20 13:22:10.142096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:28.507 pt1 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.507 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.767 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.767 "name": "raid_bdev1", 00:08:28.767 "uuid": "d35e2657-a83f-426d-8ff6-687206f72249", 00:08:28.767 
"strip_size_kb": 64, 00:08:28.767 "state": "configuring", 00:08:28.767 "raid_level": "raid0", 00:08:28.767 "superblock": true, 00:08:28.767 "num_base_bdevs": 3, 00:08:28.767 "num_base_bdevs_discovered": 1, 00:08:28.767 "num_base_bdevs_operational": 3, 00:08:28.767 "base_bdevs_list": [ 00:08:28.767 { 00:08:28.767 "name": "pt1", 00:08:28.767 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:28.767 "is_configured": true, 00:08:28.767 "data_offset": 2048, 00:08:28.767 "data_size": 63488 00:08:28.767 }, 00:08:28.767 { 00:08:28.767 "name": null, 00:08:28.767 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.767 "is_configured": false, 00:08:28.767 "data_offset": 2048, 00:08:28.767 "data_size": 63488 00:08:28.767 }, 00:08:28.767 { 00:08:28.767 "name": null, 00:08:28.767 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:28.767 "is_configured": false, 00:08:28.767 "data_offset": 2048, 00:08:28.767 "data_size": 63488 00:08:28.767 } 00:08:28.767 ] 00:08:28.767 }' 00:08:28.767 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.767 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.026 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:29.026 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:29.026 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.026 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.026 [2024-11-20 13:22:10.562829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:29.026 [2024-11-20 13:22:10.562922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.026 [2024-11-20 13:22:10.562944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:08:29.026 [2024-11-20 13:22:10.562958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.026 [2024-11-20 13:22:10.563375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.026 [2024-11-20 13:22:10.563402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:29.026 [2024-11-20 13:22:10.563480] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:29.026 [2024-11-20 13:22:10.563507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:29.026 pt2 00:08:29.026 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.026 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:29.026 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.026 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.026 [2024-11-20 13:22:10.574804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:29.026 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.027 13:22:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.027 "name": "raid_bdev1", 00:08:29.027 "uuid": "d35e2657-a83f-426d-8ff6-687206f72249", 00:08:29.027 "strip_size_kb": 64, 00:08:29.027 "state": "configuring", 00:08:29.027 "raid_level": "raid0", 00:08:29.027 "superblock": true, 00:08:29.027 "num_base_bdevs": 3, 00:08:29.027 "num_base_bdevs_discovered": 1, 00:08:29.027 "num_base_bdevs_operational": 3, 00:08:29.027 "base_bdevs_list": [ 00:08:29.027 { 00:08:29.027 "name": "pt1", 00:08:29.027 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:29.027 "is_configured": true, 00:08:29.027 "data_offset": 2048, 00:08:29.027 "data_size": 63488 00:08:29.027 }, 00:08:29.027 { 00:08:29.027 "name": null, 00:08:29.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.027 "is_configured": false, 00:08:29.027 "data_offset": 0, 00:08:29.027 "data_size": 63488 00:08:29.027 }, 00:08:29.027 { 00:08:29.027 "name": null, 00:08:29.027 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:29.027 
"is_configured": false, 00:08:29.027 "data_offset": 2048, 00:08:29.027 "data_size": 63488 00:08:29.027 } 00:08:29.027 ] 00:08:29.027 }' 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.027 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.597 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:29.597 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:29.597 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:29.597 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.597 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.597 [2024-11-20 13:22:10.994096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:29.597 [2024-11-20 13:22:10.994154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.597 [2024-11-20 13:22:10.994194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:29.597 [2024-11-20 13:22:10.994206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.597 [2024-11-20 13:22:10.994622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.597 [2024-11-20 13:22:10.994645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:29.597 [2024-11-20 13:22:10.994728] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:29.597 [2024-11-20 13:22:10.994754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:29.597 pt2 00:08:29.597 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
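The `verify_raid_bdev_state` calls traced above fetch the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all`, select the entry with `jq`, and compare fields such as `state` and `raid_level` against expected values. A minimal standalone sketch of that check pattern, using a canned JSON fragment in place of the live RPC response (the field extraction here uses `grep`/`cut` rather than the harness's `jq` pipeline, purely for illustration):

```shell
# Hedged sketch of the verify_raid_bdev_state pattern: assert on fields of
# the raid bdev info JSON. The JSON below is a canned stand-in for the
# output of `rpc_cmd bdev_raid_get_bdevs all`.
raid_bdev_info='{"name":"raid_bdev1","state":"configuring","raid_level":"raid0","num_base_bdevs_discovered":1}'

# Extract the "state" and "raid_level" values from the JSON text.
state=$(printf '%s' "$raid_bdev_info" | grep -o '"state":"[^"]*"' | cut -d'"' -f4)
level=$(printf '%s' "$raid_bdev_info" | grep -o '"raid_level":"[^"]*"' | cut -d'"' -f4)

# Fail (non-zero exit) if the observed state does not match the expectation,
# mirroring the test's comparison step.
[ "$state" = configuring ] || exit 1
[ "$level" = raid0 ] || exit 1
echo "raid_bdev1: $state ($level)"
```

In the real harness the same comparison is driven by `jq -r '.[] | select(.name == "raid_bdev1")'` over the full RPC output, as the trace shows.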
00:08:29.597 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:29.597 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:29.597 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:29.597 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.597 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.597 [2024-11-20 13:22:11.006064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:29.597 [2024-11-20 13:22:11.006111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.597 [2024-11-20 13:22:11.006129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:29.597 [2024-11-20 13:22:11.006146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.597 [2024-11-20 13:22:11.006486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.597 [2024-11-20 13:22:11.006509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:29.597 [2024-11-20 13:22:11.006567] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:29.597 [2024-11-20 13:22:11.006590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:29.597 [2024-11-20 13:22:11.006693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:29.597 [2024-11-20 13:22:11.006707] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:29.597 [2024-11-20 13:22:11.006929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:29.597 [2024-11-20 13:22:11.007049] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:29.597 [2024-11-20 13:22:11.007064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:29.597 [2024-11-20 13:22:11.007165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.597 pt3 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.597 "name": "raid_bdev1", 00:08:29.597 "uuid": "d35e2657-a83f-426d-8ff6-687206f72249", 00:08:29.597 "strip_size_kb": 64, 00:08:29.597 "state": "online", 00:08:29.597 "raid_level": "raid0", 00:08:29.597 "superblock": true, 00:08:29.597 "num_base_bdevs": 3, 00:08:29.597 "num_base_bdevs_discovered": 3, 00:08:29.597 "num_base_bdevs_operational": 3, 00:08:29.597 "base_bdevs_list": [ 00:08:29.597 { 00:08:29.597 "name": "pt1", 00:08:29.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:29.597 "is_configured": true, 00:08:29.597 "data_offset": 2048, 00:08:29.597 "data_size": 63488 00:08:29.597 }, 00:08:29.597 { 00:08:29.597 "name": "pt2", 00:08:29.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.597 "is_configured": true, 00:08:29.597 "data_offset": 2048, 00:08:29.597 "data_size": 63488 00:08:29.597 }, 00:08:29.597 { 00:08:29.597 "name": "pt3", 00:08:29.597 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:29.597 "is_configured": true, 00:08:29.597 "data_offset": 2048, 00:08:29.597 "data_size": 63488 00:08:29.597 } 00:08:29.597 ] 00:08:29.597 }' 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.597 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.858 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:29.858 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:29.858 13:22:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:29.858 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:29.858 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:29.858 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:29.858 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:29.858 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.858 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.858 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:29.858 [2024-11-20 13:22:11.425622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.858 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.858 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:29.858 "name": "raid_bdev1", 00:08:29.858 "aliases": [ 00:08:29.858 "d35e2657-a83f-426d-8ff6-687206f72249" 00:08:29.858 ], 00:08:29.858 "product_name": "Raid Volume", 00:08:29.858 "block_size": 512, 00:08:29.858 "num_blocks": 190464, 00:08:29.858 "uuid": "d35e2657-a83f-426d-8ff6-687206f72249", 00:08:29.858 "assigned_rate_limits": { 00:08:29.858 "rw_ios_per_sec": 0, 00:08:29.858 "rw_mbytes_per_sec": 0, 00:08:29.858 "r_mbytes_per_sec": 0, 00:08:29.858 "w_mbytes_per_sec": 0 00:08:29.858 }, 00:08:29.858 "claimed": false, 00:08:29.858 "zoned": false, 00:08:29.858 "supported_io_types": { 00:08:29.858 "read": true, 00:08:29.858 "write": true, 00:08:29.858 "unmap": true, 00:08:29.858 "flush": true, 00:08:29.858 "reset": true, 00:08:29.858 "nvme_admin": false, 00:08:29.858 "nvme_io": false, 00:08:29.858 "nvme_io_md": false, 00:08:29.858 
"write_zeroes": true, 00:08:29.858 "zcopy": false, 00:08:29.858 "get_zone_info": false, 00:08:29.858 "zone_management": false, 00:08:29.858 "zone_append": false, 00:08:29.858 "compare": false, 00:08:29.858 "compare_and_write": false, 00:08:29.858 "abort": false, 00:08:29.858 "seek_hole": false, 00:08:29.858 "seek_data": false, 00:08:29.858 "copy": false, 00:08:29.858 "nvme_iov_md": false 00:08:29.858 }, 00:08:29.858 "memory_domains": [ 00:08:29.858 { 00:08:29.858 "dma_device_id": "system", 00:08:29.858 "dma_device_type": 1 00:08:29.858 }, 00:08:29.858 { 00:08:29.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.858 "dma_device_type": 2 00:08:29.858 }, 00:08:29.858 { 00:08:29.858 "dma_device_id": "system", 00:08:29.858 "dma_device_type": 1 00:08:29.858 }, 00:08:29.858 { 00:08:29.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.858 "dma_device_type": 2 00:08:29.858 }, 00:08:29.858 { 00:08:29.858 "dma_device_id": "system", 00:08:29.858 "dma_device_type": 1 00:08:29.858 }, 00:08:29.858 { 00:08:29.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.858 "dma_device_type": 2 00:08:29.858 } 00:08:29.858 ], 00:08:29.858 "driver_specific": { 00:08:29.858 "raid": { 00:08:29.858 "uuid": "d35e2657-a83f-426d-8ff6-687206f72249", 00:08:29.858 "strip_size_kb": 64, 00:08:29.858 "state": "online", 00:08:29.858 "raid_level": "raid0", 00:08:29.858 "superblock": true, 00:08:29.858 "num_base_bdevs": 3, 00:08:29.858 "num_base_bdevs_discovered": 3, 00:08:29.858 "num_base_bdevs_operational": 3, 00:08:29.858 "base_bdevs_list": [ 00:08:29.858 { 00:08:29.858 "name": "pt1", 00:08:29.858 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:29.858 "is_configured": true, 00:08:29.858 "data_offset": 2048, 00:08:29.858 "data_size": 63488 00:08:29.858 }, 00:08:29.858 { 00:08:29.858 "name": "pt2", 00:08:29.858 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.858 "is_configured": true, 00:08:29.858 "data_offset": 2048, 00:08:29.858 "data_size": 63488 00:08:29.858 }, 00:08:29.858 
{ 00:08:29.858 "name": "pt3", 00:08:29.858 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:29.858 "is_configured": true, 00:08:29.858 "data_offset": 2048, 00:08:29.858 "data_size": 63488 00:08:29.858 } 00:08:29.858 ] 00:08:29.858 } 00:08:29.858 } 00:08:29.858 }' 00:08:29.858 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.858 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:29.858 pt2 00:08:29.858 pt3' 00:08:29.858 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.118 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:30.118 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.118 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:30.118 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.118 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.118 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.118 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.118 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.118 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.118 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.118 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:30.119 13:22:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.119 
[2024-11-20 13:22:11.697143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d35e2657-a83f-426d-8ff6-687206f72249 '!=' d35e2657-a83f-426d-8ff6-687206f72249 ']' 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75948 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 75948 ']' 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 75948 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75948 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.119 killing process with pid 75948 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75948' 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 75948 00:08:30.119 [2024-11-20 13:22:11.763014] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.119 [2024-11-20 13:22:11.763103] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.119 [2024-11-20 13:22:11.763170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.119 [2024-11-20 13:22:11.763179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:30.119 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 75948 00:08:30.378 [2024-11-20 13:22:11.796756] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.378 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:30.378 00:08:30.378 real 0m3.853s 00:08:30.378 user 0m6.064s 00:08:30.378 sys 0m0.821s 00:08:30.378 13:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.378 13:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.378 ************************************ 00:08:30.378 END TEST raid_superblock_test 00:08:30.378 ************************************ 00:08:30.638 13:22:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:30.638 13:22:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:30.638 13:22:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.638 13:22:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.638 ************************************ 00:08:30.638 START TEST raid_read_error_test 00:08:30.638 ************************************ 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:30.638 13:22:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0Qm7Vl9Prl 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76190 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76190 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 76190 ']' 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.638 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.638 [2024-11-20 13:22:12.168446] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:08:30.638 [2024-11-20 13:22:12.168577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76190 ] 00:08:30.897 [2024-11-20 13:22:12.320149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.897 [2024-11-20 13:22:12.346357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.897 [2024-11-20 13:22:12.388956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.897 [2024-11-20 13:22:12.389020] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.465 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.465 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:31.465 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:31.465 13:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:31.465 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.465 13:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.465 BaseBdev1_malloc 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.465 true 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.465 [2024-11-20 13:22:13.031308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:31.465 [2024-11-20 13:22:13.031370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.465 [2024-11-20 13:22:13.031394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:31.465 [2024-11-20 13:22:13.031410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.465 [2024-11-20 13:22:13.033685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.465 [2024-11-20 13:22:13.033721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:31.465 BaseBdev1 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.465 BaseBdev2_malloc 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.465 true 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.465 [2024-11-20 13:22:13.068226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:31.465 [2024-11-20 13:22:13.068275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.465 [2024-11-20 13:22:13.068292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:31.465 [2024-11-20 13:22:13.068310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.465 [2024-11-20 13:22:13.070491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.465 [2024-11-20 13:22:13.070526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:31.465 BaseBdev2 00:08:31.465 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.466 BaseBdev3_malloc 00:08:31.466 13:22:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.466 true 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.466 [2024-11-20 13:22:13.096804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:31.466 [2024-11-20 13:22:13.096851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.466 [2024-11-20 13:22:13.096869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:31.466 [2024-11-20 13:22:13.096878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.466 [2024-11-20 13:22:13.099008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.466 [2024-11-20 13:22:13.099050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:31.466 BaseBdev3 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.466 [2024-11-20 13:22:13.104870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:31.466 [2024-11-20 13:22:13.106749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.466 [2024-11-20 13:22:13.106828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:31.466 [2024-11-20 13:22:13.107021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:31.466 [2024-11-20 13:22:13.107039] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:31.466 [2024-11-20 13:22:13.107285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:08:31.466 [2024-11-20 13:22:13.107419] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:31.466 [2024-11-20 13:22:13.107432] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:31.466 [2024-11-20 13:22:13.107571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.466 13:22:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.466 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.725 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.725 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.725 "name": "raid_bdev1", 00:08:31.725 "uuid": "694184b9-23a9-4d5d-829d-65b919ffffa6", 00:08:31.725 "strip_size_kb": 64, 00:08:31.725 "state": "online", 00:08:31.725 "raid_level": "raid0", 00:08:31.725 "superblock": true, 00:08:31.725 "num_base_bdevs": 3, 00:08:31.725 "num_base_bdevs_discovered": 3, 00:08:31.725 "num_base_bdevs_operational": 3, 00:08:31.725 "base_bdevs_list": [ 00:08:31.725 { 00:08:31.725 "name": "BaseBdev1", 00:08:31.725 "uuid": "a70c7fb7-ab32-5753-992b-f57d76a449f1", 00:08:31.725 "is_configured": true, 00:08:31.725 "data_offset": 2048, 00:08:31.725 "data_size": 63488 00:08:31.725 }, 00:08:31.725 { 00:08:31.725 "name": "BaseBdev2", 00:08:31.725 "uuid": "2568d4ce-0508-57f9-bff9-59f91ffa1d9f", 00:08:31.725 "is_configured": true, 00:08:31.725 "data_offset": 2048, 00:08:31.725 "data_size": 63488 
00:08:31.725 }, 00:08:31.725 { 00:08:31.725 "name": "BaseBdev3", 00:08:31.725 "uuid": "0932baaa-94e9-5bb2-9f63-e1d566ca4391", 00:08:31.725 "is_configured": true, 00:08:31.725 "data_offset": 2048, 00:08:31.725 "data_size": 63488 00:08:31.725 } 00:08:31.725 ] 00:08:31.725 }' 00:08:31.725 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.725 13:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.984 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:31.984 13:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:31.984 [2024-11-20 13:22:13.632346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:08:32.918 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
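The `verify_raid_bdev_state` helper seen above fetches the raid bdev's JSON with `rpc_cmd bdev_raid_get_bdevs all`, selects the entry by name via `jq -r '.[] | select(.name == "raid_bdev1")'`, and compares the fields against the expected state. A minimal Python sketch of the same checks, using an abridged copy of the JSON printed in the log (field names and values taken from the output above; this is an illustration, not part of the test script):

```python
import json

# Abridged raid bdev info, as printed by `rpc_cmd bdev_raid_get_bdevs all` above.
raid_bdev_info = json.loads('''{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid0",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}''')

# The same comparisons verify_raid_bdev_state performs with jq/[[ ]] in bash:
# expected state "online", level "raid0", strip size 64, 3 of 3 base bdevs found.
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "raid0"
assert raid_bdev_info["strip_size_kb"] == 64
assert raid_bdev_info["num_base_bdevs_discovered"] == raid_bdev_info["num_base_bdevs_operational"] == 3
print("raid_bdev1 state verified")
```

In the script itself these checks run in bash against the `$raid_bdev_info` string; the sketch only mirrors the logic.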
00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.919 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.177 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.177 "name": "raid_bdev1", 00:08:33.177 "uuid": "694184b9-23a9-4d5d-829d-65b919ffffa6", 00:08:33.177 "strip_size_kb": 64, 00:08:33.177 "state": "online", 00:08:33.177 "raid_level": "raid0", 00:08:33.177 "superblock": true, 00:08:33.177 "num_base_bdevs": 3, 00:08:33.177 "num_base_bdevs_discovered": 3, 00:08:33.177 "num_base_bdevs_operational": 3, 00:08:33.177 "base_bdevs_list": [ 00:08:33.177 { 00:08:33.177 "name": "BaseBdev1", 00:08:33.177 "uuid": "a70c7fb7-ab32-5753-992b-f57d76a449f1", 00:08:33.177 "is_configured": true, 00:08:33.177 "data_offset": 2048, 00:08:33.177 "data_size": 63488 
00:08:33.177 }, 00:08:33.177 { 00:08:33.177 "name": "BaseBdev2", 00:08:33.177 "uuid": "2568d4ce-0508-57f9-bff9-59f91ffa1d9f", 00:08:33.177 "is_configured": true, 00:08:33.177 "data_offset": 2048, 00:08:33.177 "data_size": 63488 00:08:33.177 }, 00:08:33.177 { 00:08:33.177 "name": "BaseBdev3", 00:08:33.177 "uuid": "0932baaa-94e9-5bb2-9f63-e1d566ca4391", 00:08:33.177 "is_configured": true, 00:08:33.177 "data_offset": 2048, 00:08:33.177 "data_size": 63488 00:08:33.177 } 00:08:33.177 ] 00:08:33.177 }' 00:08:33.177 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.177 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.436 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:33.436 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.436 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.436 [2024-11-20 13:22:14.995808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.436 [2024-11-20 13:22:14.995859] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.436 [2024-11-20 13:22:14.998440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.436 [2024-11-20 13:22:14.998490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.436 [2024-11-20 13:22:14.998525] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.436 [2024-11-20 13:22:14.998535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:33.436 { 00:08:33.436 "results": [ 00:08:33.436 { 00:08:33.436 "job": "raid_bdev1", 00:08:33.436 "core_mask": "0x1", 00:08:33.436 "workload": "randrw", 00:08:33.436 "percentage": 50, 
00:08:33.436 "status": "finished", 00:08:33.436 "queue_depth": 1, 00:08:33.436 "io_size": 131072, 00:08:33.436 "runtime": 1.364292, 00:08:33.436 "iops": 16514.79302084891, 00:08:33.436 "mibps": 2064.3491276061136, 00:08:33.436 "io_failed": 1, 00:08:33.437 "io_timeout": 0, 00:08:33.437 "avg_latency_us": 83.86518806440834, 00:08:33.437 "min_latency_us": 20.010480349344977, 00:08:33.437 "max_latency_us": 1359.3711790393013 00:08:33.437 } 00:08:33.437 ], 00:08:33.437 "core_count": 1 00:08:33.437 } 00:08:33.437 13:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.437 13:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76190 00:08:33.437 13:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 76190 ']' 00:08:33.437 13:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 76190 00:08:33.437 13:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:33.437 13:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.437 13:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76190 00:08:33.437 13:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.437 13:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.437 killing process with pid 76190 00:08:33.437 13:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76190' 00:08:33.437 13:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 76190 00:08:33.437 [2024-11-20 13:22:15.044197] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:33.437 13:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 76190 00:08:33.437 [2024-11-20 
13:22:15.071113] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.696 13:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0Qm7Vl9Prl 00:08:33.696 13:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:33.696 13:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:33.696 13:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:33.696 13:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:33.696 13:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.696 13:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:33.696 13:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:33.696 00:08:33.696 real 0m3.206s 00:08:33.696 user 0m4.108s 00:08:33.696 sys 0m0.493s 00:08:33.696 13:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.696 13:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.696 ************************************ 00:08:33.696 END TEST raid_read_error_test 00:08:33.696 ************************************ 00:08:33.696 13:22:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:33.696 13:22:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:33.696 13:22:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.696 13:22:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:33.696 ************************************ 00:08:33.696 START TEST raid_write_error_test 00:08:33.696 ************************************ 00:08:33.696 13:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:33.696 13:22:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:33.696 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:33.697 13:22:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:33.697 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:33.956 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VQ5TUEUDUx 00:08:33.956 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76319 00:08:33.956 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:33.956 13:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76319 00:08:33.956 13:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 76319 ']' 00:08:33.956 13:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.956 13:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.956 13:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
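The read-error test above ended by grepping the bdevperf log for `raid_bdev1`, extracting column 6 with awk to get `fail_per_s=0.73`, and asserting it differs from `0.00` (one injected read failure was expected). That figure follows directly from the results JSON printed earlier in the log: `io_failed = 1` over `runtime = 1.364292` seconds. A small Python sketch reproducing the arithmetic (values copied from the log; the awk/grep pipeline is what the script actually uses):

```python
# From the bdevperf results JSON in the read-error test above.
io_failed = 1
runtime_s = 1.364292

# Failures per second, rounded to two decimals as bdevperf reports it.
fail_per_s = round(io_failed / runtime_s, 2)
assert fail_per_s == 0.73            # matches fail_per_s=0.73 grepped in the log
assert f"{fail_per_s:.2f}" != "0.00"  # the script's [[ $fail_per_s != 0.00 ]] check
print(fail_per_s)
```

For raid0 (no redundancy, `has_redundancy` returns 1), the injected error must surface as a failed I/O, so a nonzero failure rate is the passing condition here.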
00:08:33.956 13:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.956 13:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.956 [2024-11-20 13:22:15.445762] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:08:33.956 [2024-11-20 13:22:15.445996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76319 ] 00:08:33.956 [2024-11-20 13:22:15.598946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.216 [2024-11-20 13:22:15.627886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.216 [2024-11-20 13:22:15.670739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.216 [2024-11-20 13:22:15.670787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.786 BaseBdev1_malloc 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.786 true 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.786 [2024-11-20 13:22:16.305108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:34.786 [2024-11-20 13:22:16.305161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.786 [2024-11-20 13:22:16.305179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:34.786 [2024-11-20 13:22:16.305188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.786 [2024-11-20 13:22:16.307301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.786 [2024-11-20 13:22:16.307337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:34.786 BaseBdev1 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.786 BaseBdev2_malloc 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.786 true 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.786 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.786 [2024-11-20 13:22:16.345716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:34.786 [2024-11-20 13:22:16.345769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.786 [2024-11-20 13:22:16.345789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:34.786 [2024-11-20 13:22:16.345806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.786 [2024-11-20 13:22:16.348260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.787 [2024-11-20 13:22:16.348354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:34.787 BaseBdev2 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:34.787 13:22:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.787 BaseBdev3_malloc 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.787 true 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.787 [2024-11-20 13:22:16.386506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:34.787 [2024-11-20 13:22:16.386556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.787 [2024-11-20 13:22:16.386577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:34.787 [2024-11-20 13:22:16.386586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.787 [2024-11-20 13:22:16.388694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.787 [2024-11-20 13:22:16.388731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:34.787 BaseBdev3 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.787 [2024-11-20 13:22:16.398553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.787 [2024-11-20 13:22:16.400449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.787 [2024-11-20 13:22:16.400519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:34.787 [2024-11-20 13:22:16.400689] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:34.787 [2024-11-20 13:22:16.400703] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:34.787 [2024-11-20 13:22:16.400947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:08:34.787 [2024-11-20 13:22:16.401103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:34.787 [2024-11-20 13:22:16.401112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:34.787 [2024-11-20 13:22:16.401255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.787 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.047 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.047 "name": "raid_bdev1", 00:08:35.047 "uuid": "5edfe7c8-6c13-4ebd-8534-9e1584eb8cb4", 00:08:35.047 "strip_size_kb": 64, 00:08:35.047 "state": "online", 00:08:35.047 "raid_level": "raid0", 00:08:35.047 "superblock": true, 00:08:35.047 "num_base_bdevs": 3, 00:08:35.047 "num_base_bdevs_discovered": 3, 00:08:35.047 "num_base_bdevs_operational": 3, 00:08:35.047 "base_bdevs_list": [ 00:08:35.047 { 00:08:35.047 "name": "BaseBdev1", 
00:08:35.047 "uuid": "ec9b3d06-1f09-5241-8810-88a760fe0585", 00:08:35.047 "is_configured": true, 00:08:35.047 "data_offset": 2048, 00:08:35.047 "data_size": 63488 00:08:35.047 }, 00:08:35.047 { 00:08:35.047 "name": "BaseBdev2", 00:08:35.047 "uuid": "a6c5b0a0-f3a7-5fb4-b978-b28f539552a9", 00:08:35.047 "is_configured": true, 00:08:35.047 "data_offset": 2048, 00:08:35.047 "data_size": 63488 00:08:35.047 }, 00:08:35.047 { 00:08:35.047 "name": "BaseBdev3", 00:08:35.047 "uuid": "f2878c7c-d560-53f2-b3e6-b7a726e1e9d0", 00:08:35.047 "is_configured": true, 00:08:35.047 "data_offset": 2048, 00:08:35.047 "data_size": 63488 00:08:35.047 } 00:08:35.047 ] 00:08:35.047 }' 00:08:35.047 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.047 13:22:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.307 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:35.307 13:22:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:35.307 [2024-11-20 13:22:16.890424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.245 "name": "raid_bdev1", 00:08:36.245 "uuid": "5edfe7c8-6c13-4ebd-8534-9e1584eb8cb4", 00:08:36.245 "strip_size_kb": 64, 00:08:36.245 "state": "online", 00:08:36.245 
"raid_level": "raid0", 00:08:36.245 "superblock": true, 00:08:36.245 "num_base_bdevs": 3, 00:08:36.245 "num_base_bdevs_discovered": 3, 00:08:36.245 "num_base_bdevs_operational": 3, 00:08:36.245 "base_bdevs_list": [ 00:08:36.245 { 00:08:36.245 "name": "BaseBdev1", 00:08:36.245 "uuid": "ec9b3d06-1f09-5241-8810-88a760fe0585", 00:08:36.245 "is_configured": true, 00:08:36.245 "data_offset": 2048, 00:08:36.245 "data_size": 63488 00:08:36.245 }, 00:08:36.245 { 00:08:36.245 "name": "BaseBdev2", 00:08:36.245 "uuid": "a6c5b0a0-f3a7-5fb4-b978-b28f539552a9", 00:08:36.245 "is_configured": true, 00:08:36.245 "data_offset": 2048, 00:08:36.245 "data_size": 63488 00:08:36.245 }, 00:08:36.245 { 00:08:36.245 "name": "BaseBdev3", 00:08:36.245 "uuid": "f2878c7c-d560-53f2-b3e6-b7a726e1e9d0", 00:08:36.245 "is_configured": true, 00:08:36.245 "data_offset": 2048, 00:08:36.245 "data_size": 63488 00:08:36.245 } 00:08:36.245 ] 00:08:36.245 }' 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.245 13:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.839 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:36.839 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.839 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.839 [2024-11-20 13:22:18.236066] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:36.839 [2024-11-20 13:22:18.236100] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.839 [2024-11-20 13:22:18.238791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.839 [2024-11-20 13:22:18.238845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.839 [2024-11-20 13:22:18.238882] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.839 [2024-11-20 13:22:18.238901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:36.839 { 00:08:36.839 "results": [ 00:08:36.839 { 00:08:36.839 "job": "raid_bdev1", 00:08:36.839 "core_mask": "0x1", 00:08:36.839 "workload": "randrw", 00:08:36.839 "percentage": 50, 00:08:36.839 "status": "finished", 00:08:36.839 "queue_depth": 1, 00:08:36.839 "io_size": 131072, 00:08:36.839 "runtime": 1.345587, 00:08:36.839 "iops": 13126.61314355742, 00:08:36.839 "mibps": 1640.8266429446776, 00:08:36.839 "io_failed": 1, 00:08:36.839 "io_timeout": 0, 00:08:36.839 "avg_latency_us": 106.56262261882159, 00:08:36.839 "min_latency_us": 26.494323144104804, 00:08:36.839 "max_latency_us": 1459.5353711790392 00:08:36.839 } 00:08:36.839 ], 00:08:36.839 "core_count": 1 00:08:36.839 } 00:08:36.839 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.839 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76319 00:08:36.839 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 76319 ']' 00:08:36.839 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 76319 00:08:36.839 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:36.839 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.839 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76319 00:08:36.839 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.839 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.839 killing process with pid 76319 00:08:36.839 
13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76319' 00:08:36.839 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 76319 00:08:36.839 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 76319 00:08:36.839 [2024-11-20 13:22:18.281596] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.839 [2024-11-20 13:22:18.307719] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.100 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VQ5TUEUDUx 00:08:37.100 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:37.100 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:37.100 ************************************ 00:08:37.100 END TEST raid_write_error_test 00:08:37.100 ************************************ 00:08:37.100 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:37.100 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:37.100 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:37.100 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:37.100 13:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:37.100 00:08:37.100 real 0m3.167s 00:08:37.100 user 0m3.979s 00:08:37.100 sys 0m0.489s 00:08:37.100 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.100 13:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.100 13:22:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:37.100 13:22:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:37.100 13:22:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:37.100 13:22:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.100 13:22:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.100 ************************************ 00:08:37.100 START TEST raid_state_function_test 00:08:37.100 ************************************ 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:37.100 13:22:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76446 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76446' 00:08:37.100 Process raid pid: 76446 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76446 00:08:37.100 13:22:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 76446 ']' 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.100 13:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.100 [2024-11-20 13:22:18.673199] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:08:37.100 [2024-11-20 13:22:18.673741] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.360 [2024-11-20 13:22:18.807952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.360 [2024-11-20 13:22:18.834360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.360 [2024-11-20 13:22:18.877359] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.360 [2024-11-20 13:22:18.877411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.928 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.928 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:37.928 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.928 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.929 [2024-11-20 13:22:19.519279] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.929 [2024-11-20 13:22:19.519345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.929 [2024-11-20 13:22:19.519377] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.929 [2024-11-20 13:22:19.519387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.929 [2024-11-20 13:22:19.519393] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:37.929 [2024-11-20 13:22:19.519404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.929 "name": "Existed_Raid", 00:08:37.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.929 "strip_size_kb": 64, 00:08:37.929 "state": "configuring", 00:08:37.929 "raid_level": "concat", 00:08:37.929 "superblock": false, 00:08:37.929 "num_base_bdevs": 3, 00:08:37.929 "num_base_bdevs_discovered": 0, 00:08:37.929 "num_base_bdevs_operational": 3, 00:08:37.929 "base_bdevs_list": [ 00:08:37.929 { 00:08:37.929 "name": "BaseBdev1", 00:08:37.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.929 "is_configured": false, 00:08:37.929 "data_offset": 0, 00:08:37.929 "data_size": 0 00:08:37.929 }, 00:08:37.929 { 00:08:37.929 "name": "BaseBdev2", 00:08:37.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.929 "is_configured": false, 00:08:37.929 "data_offset": 0, 00:08:37.929 "data_size": 0 00:08:37.929 }, 00:08:37.929 { 00:08:37.929 "name": "BaseBdev3", 00:08:37.929 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:37.929 "is_configured": false, 00:08:37.929 "data_offset": 0, 00:08:37.929 "data_size": 0 00:08:37.929 } 00:08:37.929 ] 00:08:37.929 }' 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.929 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.500 [2024-11-20 13:22:19.966456] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:38.500 [2024-11-20 13:22:19.966552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.500 [2024-11-20 13:22:19.974460] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.500 [2024-11-20 13:22:19.974556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.500 [2024-11-20 13:22:19.974584] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.500 [2024-11-20 13:22:19.974606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:38.500 [2024-11-20 13:22:19.974624] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:38.500 [2024-11-20 13:22:19.974644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.500 [2024-11-20 13:22:19.991522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.500 BaseBdev1 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.500 13:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.500 [ 00:08:38.500 { 00:08:38.500 "name": "BaseBdev1", 00:08:38.500 "aliases": [ 00:08:38.500 "ba4b1093-7768-4eaf-a04f-ef42f7367316" 00:08:38.500 ], 00:08:38.500 "product_name": "Malloc disk", 00:08:38.500 "block_size": 512, 00:08:38.500 "num_blocks": 65536, 00:08:38.500 "uuid": "ba4b1093-7768-4eaf-a04f-ef42f7367316", 00:08:38.500 "assigned_rate_limits": { 00:08:38.500 "rw_ios_per_sec": 0, 00:08:38.500 "rw_mbytes_per_sec": 0, 00:08:38.500 "r_mbytes_per_sec": 0, 00:08:38.500 "w_mbytes_per_sec": 0 00:08:38.500 }, 00:08:38.500 "claimed": true, 00:08:38.500 "claim_type": "exclusive_write", 00:08:38.500 "zoned": false, 00:08:38.500 "supported_io_types": { 00:08:38.500 "read": true, 00:08:38.500 "write": true, 00:08:38.500 "unmap": true, 00:08:38.500 "flush": true, 00:08:38.500 "reset": true, 00:08:38.500 "nvme_admin": false, 00:08:38.500 "nvme_io": false, 00:08:38.500 "nvme_io_md": false, 00:08:38.500 "write_zeroes": true, 00:08:38.500 "zcopy": true, 00:08:38.500 "get_zone_info": false, 00:08:38.500 "zone_management": false, 00:08:38.500 "zone_append": false, 00:08:38.500 "compare": false, 00:08:38.500 "compare_and_write": false, 00:08:38.500 "abort": true, 00:08:38.500 "seek_hole": false, 00:08:38.500 "seek_data": false, 00:08:38.500 "copy": true, 00:08:38.500 "nvme_iov_md": false 00:08:38.500 }, 00:08:38.500 "memory_domains": [ 00:08:38.500 { 00:08:38.500 "dma_device_id": "system", 00:08:38.500 "dma_device_type": 1 00:08:38.500 }, 00:08:38.500 { 00:08:38.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:38.500 "dma_device_type": 2 00:08:38.500 } 00:08:38.500 ], 00:08:38.500 "driver_specific": {} 00:08:38.500 } 00:08:38.500 ] 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.500 13:22:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.500 "name": "Existed_Raid", 00:08:38.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.500 "strip_size_kb": 64, 00:08:38.500 "state": "configuring", 00:08:38.500 "raid_level": "concat", 00:08:38.500 "superblock": false, 00:08:38.500 "num_base_bdevs": 3, 00:08:38.500 "num_base_bdevs_discovered": 1, 00:08:38.500 "num_base_bdevs_operational": 3, 00:08:38.500 "base_bdevs_list": [ 00:08:38.500 { 00:08:38.500 "name": "BaseBdev1", 00:08:38.500 "uuid": "ba4b1093-7768-4eaf-a04f-ef42f7367316", 00:08:38.500 "is_configured": true, 00:08:38.500 "data_offset": 0, 00:08:38.500 "data_size": 65536 00:08:38.500 }, 00:08:38.500 { 00:08:38.500 "name": "BaseBdev2", 00:08:38.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.500 "is_configured": false, 00:08:38.500 "data_offset": 0, 00:08:38.500 "data_size": 0 00:08:38.500 }, 00:08:38.500 { 00:08:38.500 "name": "BaseBdev3", 00:08:38.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.500 "is_configured": false, 00:08:38.500 "data_offset": 0, 00:08:38.500 "data_size": 0 00:08:38.500 } 00:08:38.500 ] 00:08:38.500 }' 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.500 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.070 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:39.070 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.070 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.070 [2024-11-20 13:22:20.482751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:39.070 [2024-11-20 13:22:20.482862] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:39.070 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.070 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:39.070 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.070 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.070 [2024-11-20 13:22:20.490772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.070 [2024-11-20 13:22:20.492703] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.070 [2024-11-20 13:22:20.492813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.071 [2024-11-20 13:22:20.492843] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:39.071 [2024-11-20 13:22:20.492869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.071 13:22:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.071 "name": "Existed_Raid", 00:08:39.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.071 "strip_size_kb": 64, 00:08:39.071 "state": "configuring", 00:08:39.071 "raid_level": "concat", 00:08:39.071 "superblock": false, 00:08:39.071 "num_base_bdevs": 3, 00:08:39.071 "num_base_bdevs_discovered": 1, 00:08:39.071 "num_base_bdevs_operational": 3, 00:08:39.071 "base_bdevs_list": [ 00:08:39.071 { 00:08:39.071 "name": "BaseBdev1", 00:08:39.071 "uuid": "ba4b1093-7768-4eaf-a04f-ef42f7367316", 00:08:39.071 "is_configured": true, 00:08:39.071 "data_offset": 
0, 00:08:39.071 "data_size": 65536 00:08:39.071 }, 00:08:39.071 { 00:08:39.071 "name": "BaseBdev2", 00:08:39.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.071 "is_configured": false, 00:08:39.071 "data_offset": 0, 00:08:39.071 "data_size": 0 00:08:39.071 }, 00:08:39.071 { 00:08:39.071 "name": "BaseBdev3", 00:08:39.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.071 "is_configured": false, 00:08:39.071 "data_offset": 0, 00:08:39.071 "data_size": 0 00:08:39.071 } 00:08:39.071 ] 00:08:39.071 }' 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.071 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.331 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:39.331 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.331 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.331 [2024-11-20 13:22:20.965341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.331 BaseBdev2 00:08:39.331 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.331 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:39.331 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:39.331 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.331 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:39.331 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.331 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:39.331 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.331 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.331 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.331 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.331 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:39.332 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.332 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.332 [ 00:08:39.332 { 00:08:39.332 "name": "BaseBdev2", 00:08:39.332 "aliases": [ 00:08:39.332 "a6c076a3-673c-4b50-bfe9-91dd8cfbe3ac" 00:08:39.332 ], 00:08:39.332 "product_name": "Malloc disk", 00:08:39.332 "block_size": 512, 00:08:39.332 "num_blocks": 65536, 00:08:39.332 "uuid": "a6c076a3-673c-4b50-bfe9-91dd8cfbe3ac", 00:08:39.332 "assigned_rate_limits": { 00:08:39.332 "rw_ios_per_sec": 0, 00:08:39.332 "rw_mbytes_per_sec": 0, 00:08:39.332 "r_mbytes_per_sec": 0, 00:08:39.332 "w_mbytes_per_sec": 0 00:08:39.332 }, 00:08:39.332 "claimed": true, 00:08:39.332 "claim_type": "exclusive_write", 00:08:39.332 "zoned": false, 00:08:39.332 "supported_io_types": { 00:08:39.332 "read": true, 00:08:39.332 "write": true, 00:08:39.332 "unmap": true, 00:08:39.332 "flush": true, 00:08:39.332 "reset": true, 00:08:39.332 "nvme_admin": false, 00:08:39.332 "nvme_io": false, 00:08:39.332 "nvme_io_md": false, 00:08:39.332 "write_zeroes": true, 00:08:39.332 "zcopy": true, 00:08:39.332 "get_zone_info": false, 00:08:39.332 "zone_management": false, 00:08:39.332 "zone_append": false, 00:08:39.332 "compare": false, 00:08:39.332 "compare_and_write": false, 00:08:39.332 "abort": true, 00:08:39.332 "seek_hole": 
false, 00:08:39.332 "seek_data": false, 00:08:39.332 "copy": true, 00:08:39.332 "nvme_iov_md": false 00:08:39.332 }, 00:08:39.332 "memory_domains": [ 00:08:39.332 { 00:08:39.332 "dma_device_id": "system", 00:08:39.332 "dma_device_type": 1 00:08:39.332 }, 00:08:39.332 { 00:08:39.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.332 "dma_device_type": 2 00:08:39.332 } 00:08:39.332 ], 00:08:39.332 "driver_specific": {} 00:08:39.332 } 00:08:39.332 ] 00:08:39.332 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.592 13:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:39.592 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:39.592 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.592 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.592 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.592 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.592 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.592 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.592 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.592 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.592 13:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.592 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.592 13:22:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.592 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.592 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.592 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.592 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.592 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.592 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.592 "name": "Existed_Raid", 00:08:39.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.592 "strip_size_kb": 64, 00:08:39.592 "state": "configuring", 00:08:39.592 "raid_level": "concat", 00:08:39.592 "superblock": false, 00:08:39.592 "num_base_bdevs": 3, 00:08:39.592 "num_base_bdevs_discovered": 2, 00:08:39.592 "num_base_bdevs_operational": 3, 00:08:39.592 "base_bdevs_list": [ 00:08:39.592 { 00:08:39.592 "name": "BaseBdev1", 00:08:39.592 "uuid": "ba4b1093-7768-4eaf-a04f-ef42f7367316", 00:08:39.592 "is_configured": true, 00:08:39.592 "data_offset": 0, 00:08:39.592 "data_size": 65536 00:08:39.592 }, 00:08:39.592 { 00:08:39.592 "name": "BaseBdev2", 00:08:39.592 "uuid": "a6c076a3-673c-4b50-bfe9-91dd8cfbe3ac", 00:08:39.592 "is_configured": true, 00:08:39.592 "data_offset": 0, 00:08:39.592 "data_size": 65536 00:08:39.592 }, 00:08:39.592 { 00:08:39.592 "name": "BaseBdev3", 00:08:39.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.592 "is_configured": false, 00:08:39.592 "data_offset": 0, 00:08:39.592 "data_size": 0 00:08:39.592 } 00:08:39.592 ] 00:08:39.592 }' 00:08:39.592 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.592 13:22:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.852 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:39.852 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.852 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.852 [2024-11-20 13:22:21.449461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:39.852 [2024-11-20 13:22:21.449503] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:39.852 [2024-11-20 13:22:21.449513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:39.852 [2024-11-20 13:22:21.449776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:39.852 [2024-11-20 13:22:21.449919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:39.852 [2024-11-20 13:22:21.449930] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:39.852 [2024-11-20 13:22:21.450174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.852 BaseBdev3 00:08:39.852 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.852 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:39.852 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:39.852 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.852 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:39.852 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.852 13:22:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.852 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.852 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.852 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.852 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.852 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:39.852 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.852 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.852 [ 00:08:39.852 { 00:08:39.852 "name": "BaseBdev3", 00:08:39.852 "aliases": [ 00:08:39.852 "7bece811-1689-4392-a405-db5ff7e8532c" 00:08:39.852 ], 00:08:39.852 "product_name": "Malloc disk", 00:08:39.852 "block_size": 512, 00:08:39.852 "num_blocks": 65536, 00:08:39.852 "uuid": "7bece811-1689-4392-a405-db5ff7e8532c", 00:08:39.852 "assigned_rate_limits": { 00:08:39.852 "rw_ios_per_sec": 0, 00:08:39.852 "rw_mbytes_per_sec": 0, 00:08:39.852 "r_mbytes_per_sec": 0, 00:08:39.852 "w_mbytes_per_sec": 0 00:08:39.852 }, 00:08:39.852 "claimed": true, 00:08:39.852 "claim_type": "exclusive_write", 00:08:39.852 "zoned": false, 00:08:39.852 "supported_io_types": { 00:08:39.852 "read": true, 00:08:39.852 "write": true, 00:08:39.853 "unmap": true, 00:08:39.853 "flush": true, 00:08:39.853 "reset": true, 00:08:39.853 "nvme_admin": false, 00:08:39.853 "nvme_io": false, 00:08:39.853 "nvme_io_md": false, 00:08:39.853 "write_zeroes": true, 00:08:39.853 "zcopy": true, 00:08:39.853 "get_zone_info": false, 00:08:39.853 "zone_management": false, 00:08:39.853 "zone_append": false, 00:08:39.853 "compare": false, 
00:08:39.853 "compare_and_write": false, 00:08:39.853 "abort": true, 00:08:39.853 "seek_hole": false, 00:08:39.853 "seek_data": false, 00:08:39.853 "copy": true, 00:08:39.853 "nvme_iov_md": false 00:08:39.853 }, 00:08:39.853 "memory_domains": [ 00:08:39.853 { 00:08:39.853 "dma_device_id": "system", 00:08:39.853 "dma_device_type": 1 00:08:39.853 }, 00:08:39.853 { 00:08:39.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.853 "dma_device_type": 2 00:08:39.853 } 00:08:39.853 ], 00:08:39.853 "driver_specific": {} 00:08:39.853 } 00:08:39.853 ] 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.853 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.113 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.113 "name": "Existed_Raid", 00:08:40.113 "uuid": "3493b3d4-81ab-4a2c-8528-3bf0edd5ba94", 00:08:40.113 "strip_size_kb": 64, 00:08:40.113 "state": "online", 00:08:40.113 "raid_level": "concat", 00:08:40.113 "superblock": false, 00:08:40.113 "num_base_bdevs": 3, 00:08:40.113 "num_base_bdevs_discovered": 3, 00:08:40.113 "num_base_bdevs_operational": 3, 00:08:40.113 "base_bdevs_list": [ 00:08:40.113 { 00:08:40.113 "name": "BaseBdev1", 00:08:40.113 "uuid": "ba4b1093-7768-4eaf-a04f-ef42f7367316", 00:08:40.113 "is_configured": true, 00:08:40.113 "data_offset": 0, 00:08:40.113 "data_size": 65536 00:08:40.113 }, 00:08:40.113 { 00:08:40.113 "name": "BaseBdev2", 00:08:40.113 "uuid": "a6c076a3-673c-4b50-bfe9-91dd8cfbe3ac", 00:08:40.113 "is_configured": true, 00:08:40.113 "data_offset": 0, 00:08:40.113 "data_size": 65536 00:08:40.113 }, 00:08:40.113 { 00:08:40.113 "name": "BaseBdev3", 00:08:40.113 "uuid": "7bece811-1689-4392-a405-db5ff7e8532c", 00:08:40.113 "is_configured": true, 00:08:40.113 "data_offset": 0, 00:08:40.113 "data_size": 65536 00:08:40.113 } 00:08:40.113 ] 00:08:40.113 }' 00:08:40.113 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
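Editorial note: once the third base bdev is claimed, the log shows `blockcnt 196608, blocklen 512` for the raid. For a concat level with `superblock: false`, that total is just the sum of each base bdev's usable region, as the `data_offset`/`data_size` fields in the dump above indicate. A small arithmetic sketch (values copied from the log; the list layout itself is illustrative):

```python
# For a concat raid without a superblock, total capacity is the sum of
# each base bdev's usable region: data_size blocks from data_offset.
base_bdevs = [
    {"name": "BaseBdev1", "data_offset": 0, "data_size": 65536},
    {"name": "BaseBdev2", "data_offset": 0, "data_size": 65536},
    {"name": "BaseBdev3", "data_offset": 0, "data_size": 65536},
]
total_blocks = sum(b["data_size"] for b in base_bdevs)
assert total_blocks == 196608  # matches "blockcnt 196608" in the log
```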
00:08:40.113 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.374 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:40.374 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:40.374 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:40.374 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:40.374 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.374 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:40.374 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:40.374 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.374 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.374 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.374 [2024-11-20 13:22:21.941030] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.374 13:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.374 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.374 "name": "Existed_Raid", 00:08:40.374 "aliases": [ 00:08:40.374 "3493b3d4-81ab-4a2c-8528-3bf0edd5ba94" 00:08:40.374 ], 00:08:40.374 "product_name": "Raid Volume", 00:08:40.374 "block_size": 512, 00:08:40.374 "num_blocks": 196608, 00:08:40.374 "uuid": "3493b3d4-81ab-4a2c-8528-3bf0edd5ba94", 00:08:40.374 "assigned_rate_limits": { 00:08:40.374 "rw_ios_per_sec": 0, 00:08:40.374 "rw_mbytes_per_sec": 0, 00:08:40.374 "r_mbytes_per_sec": 
0, 00:08:40.374 "w_mbytes_per_sec": 0 00:08:40.374 }, 00:08:40.374 "claimed": false, 00:08:40.374 "zoned": false, 00:08:40.374 "supported_io_types": { 00:08:40.374 "read": true, 00:08:40.374 "write": true, 00:08:40.374 "unmap": true, 00:08:40.374 "flush": true, 00:08:40.374 "reset": true, 00:08:40.374 "nvme_admin": false, 00:08:40.374 "nvme_io": false, 00:08:40.374 "nvme_io_md": false, 00:08:40.374 "write_zeroes": true, 00:08:40.374 "zcopy": false, 00:08:40.374 "get_zone_info": false, 00:08:40.375 "zone_management": false, 00:08:40.375 "zone_append": false, 00:08:40.375 "compare": false, 00:08:40.375 "compare_and_write": false, 00:08:40.375 "abort": false, 00:08:40.375 "seek_hole": false, 00:08:40.375 "seek_data": false, 00:08:40.375 "copy": false, 00:08:40.375 "nvme_iov_md": false 00:08:40.375 }, 00:08:40.375 "memory_domains": [ 00:08:40.375 { 00:08:40.375 "dma_device_id": "system", 00:08:40.375 "dma_device_type": 1 00:08:40.375 }, 00:08:40.375 { 00:08:40.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.375 "dma_device_type": 2 00:08:40.375 }, 00:08:40.375 { 00:08:40.375 "dma_device_id": "system", 00:08:40.375 "dma_device_type": 1 00:08:40.375 }, 00:08:40.375 { 00:08:40.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.375 "dma_device_type": 2 00:08:40.375 }, 00:08:40.375 { 00:08:40.375 "dma_device_id": "system", 00:08:40.375 "dma_device_type": 1 00:08:40.375 }, 00:08:40.375 { 00:08:40.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.375 "dma_device_type": 2 00:08:40.375 } 00:08:40.375 ], 00:08:40.375 "driver_specific": { 00:08:40.375 "raid": { 00:08:40.375 "uuid": "3493b3d4-81ab-4a2c-8528-3bf0edd5ba94", 00:08:40.375 "strip_size_kb": 64, 00:08:40.375 "state": "online", 00:08:40.375 "raid_level": "concat", 00:08:40.375 "superblock": false, 00:08:40.375 "num_base_bdevs": 3, 00:08:40.375 "num_base_bdevs_discovered": 3, 00:08:40.375 "num_base_bdevs_operational": 3, 00:08:40.375 "base_bdevs_list": [ 00:08:40.375 { 00:08:40.375 "name": "BaseBdev1", 
00:08:40.375 "uuid": "ba4b1093-7768-4eaf-a04f-ef42f7367316", 00:08:40.375 "is_configured": true, 00:08:40.375 "data_offset": 0, 00:08:40.375 "data_size": 65536 00:08:40.375 }, 00:08:40.375 { 00:08:40.375 "name": "BaseBdev2", 00:08:40.375 "uuid": "a6c076a3-673c-4b50-bfe9-91dd8cfbe3ac", 00:08:40.375 "is_configured": true, 00:08:40.375 "data_offset": 0, 00:08:40.375 "data_size": 65536 00:08:40.375 }, 00:08:40.375 { 00:08:40.375 "name": "BaseBdev3", 00:08:40.375 "uuid": "7bece811-1689-4392-a405-db5ff7e8532c", 00:08:40.375 "is_configured": true, 00:08:40.375 "data_offset": 0, 00:08:40.375 "data_size": 65536 00:08:40.375 } 00:08:40.375 ] 00:08:40.375 } 00:08:40.375 } 00:08:40.375 }' 00:08:40.375 13:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.375 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:40.375 BaseBdev2 00:08:40.375 BaseBdev3' 00:08:40.375 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.635 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.635 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.635 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:40.635 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.635 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.635 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
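Editorial note: the `cmp_raid_bdev='512 '` / `[[ 512 == \5\1\2\ \ \ ]]` checks above compare the metadata format of the raid volume against each base bdev using the jq expression `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`, where jq renders null (absent) fields as empty strings. A hedged Python equivalent; the helper name `md_format_key` is illustrative:

```python
def md_format_key(bdev: dict) -> str:
    """Mirror of the test's jq expression:
    [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
    Missing or null fields become empty strings, as jq's join() does."""
    fields = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join(
        "" if bdev.get(f) is None else str(bdev[f]) for f in fields
    )

# In this log only block_size is reported, so both keys collapse to
# "512" followed by three separator spaces -- the '512   ' pattern
# the bash [[ ... ]] comparison matches.
raid_bdev = {"block_size": 512}
base_bdev = {"block_size": 512}
assert md_format_key(raid_bdev) == md_format_key(base_bdev) == "512   "
```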
00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.636 [2024-11-20 13:22:22.200307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:40.636 [2024-11-20 13:22:22.200335] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.636 [2024-11-20 13:22:22.200397] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.636 "name": "Existed_Raid", 00:08:40.636 "uuid": "3493b3d4-81ab-4a2c-8528-3bf0edd5ba94", 00:08:40.636 "strip_size_kb": 64, 00:08:40.636 "state": "offline", 00:08:40.636 "raid_level": "concat", 00:08:40.636 "superblock": false, 00:08:40.636 "num_base_bdevs": 3, 00:08:40.636 "num_base_bdevs_discovered": 2, 00:08:40.636 "num_base_bdevs_operational": 2, 00:08:40.636 "base_bdevs_list": [ 00:08:40.636 { 00:08:40.636 "name": null, 00:08:40.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.636 "is_configured": false, 00:08:40.636 "data_offset": 0, 00:08:40.636 "data_size": 65536 00:08:40.636 }, 00:08:40.636 { 00:08:40.636 "name": "BaseBdev2", 00:08:40.636 "uuid": 
"a6c076a3-673c-4b50-bfe9-91dd8cfbe3ac", 00:08:40.636 "is_configured": true, 00:08:40.636 "data_offset": 0, 00:08:40.636 "data_size": 65536 00:08:40.636 }, 00:08:40.636 { 00:08:40.636 "name": "BaseBdev3", 00:08:40.636 "uuid": "7bece811-1689-4392-a405-db5ff7e8532c", 00:08:40.636 "is_configured": true, 00:08:40.636 "data_offset": 0, 00:08:40.636 "data_size": 65536 00:08:40.636 } 00:08:40.636 ] 00:08:40.636 }' 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.636 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.208 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:41.208 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.208 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.208 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.208 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.208 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:41.208 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.208 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:41.208 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:41.208 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.209 [2024-11-20 13:22:22.671732] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.209 [2024-11-20 13:22:22.723338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:41.209 [2024-11-20 13:22:22.723401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:41.209 13:22:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.209 BaseBdev2 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.209 
13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.209 [ 00:08:41.209 { 00:08:41.209 "name": "BaseBdev2", 00:08:41.209 "aliases": [ 00:08:41.209 "ff394bd8-f6bb-4441-82e1-4c7a9f9f6abc" 00:08:41.209 ], 00:08:41.209 "product_name": "Malloc disk", 00:08:41.209 "block_size": 512, 00:08:41.209 "num_blocks": 65536, 00:08:41.209 "uuid": "ff394bd8-f6bb-4441-82e1-4c7a9f9f6abc", 00:08:41.209 "assigned_rate_limits": { 00:08:41.209 "rw_ios_per_sec": 0, 00:08:41.209 "rw_mbytes_per_sec": 0, 00:08:41.209 "r_mbytes_per_sec": 0, 00:08:41.209 "w_mbytes_per_sec": 0 00:08:41.209 }, 00:08:41.209 "claimed": false, 00:08:41.209 "zoned": false, 00:08:41.209 "supported_io_types": { 00:08:41.209 "read": true, 00:08:41.209 "write": true, 00:08:41.209 "unmap": true, 00:08:41.209 "flush": true, 00:08:41.209 "reset": true, 00:08:41.209 "nvme_admin": false, 00:08:41.209 "nvme_io": false, 00:08:41.209 "nvme_io_md": false, 00:08:41.209 "write_zeroes": true, 
00:08:41.209 "zcopy": true, 00:08:41.209 "get_zone_info": false, 00:08:41.209 "zone_management": false, 00:08:41.209 "zone_append": false, 00:08:41.209 "compare": false, 00:08:41.209 "compare_and_write": false, 00:08:41.209 "abort": true, 00:08:41.209 "seek_hole": false, 00:08:41.209 "seek_data": false, 00:08:41.209 "copy": true, 00:08:41.209 "nvme_iov_md": false 00:08:41.209 }, 00:08:41.209 "memory_domains": [ 00:08:41.209 { 00:08:41.209 "dma_device_id": "system", 00:08:41.209 "dma_device_type": 1 00:08:41.209 }, 00:08:41.209 { 00:08:41.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.209 "dma_device_type": 2 00:08:41.209 } 00:08:41.209 ], 00:08:41.209 "driver_specific": {} 00:08:41.209 } 00:08:41.209 ] 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.209 BaseBdev3 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.209 13:22:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.209 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.470 [ 00:08:41.470 { 00:08:41.470 "name": "BaseBdev3", 00:08:41.470 "aliases": [ 00:08:41.470 "d670fcdf-e0be-48ec-b028-6b41288c7d27" 00:08:41.470 ], 00:08:41.470 "product_name": "Malloc disk", 00:08:41.470 "block_size": 512, 00:08:41.470 "num_blocks": 65536, 00:08:41.470 "uuid": "d670fcdf-e0be-48ec-b028-6b41288c7d27", 00:08:41.470 "assigned_rate_limits": { 00:08:41.470 "rw_ios_per_sec": 0, 00:08:41.470 "rw_mbytes_per_sec": 0, 00:08:41.470 "r_mbytes_per_sec": 0, 00:08:41.470 "w_mbytes_per_sec": 0 00:08:41.470 }, 00:08:41.470 "claimed": false, 00:08:41.470 "zoned": false, 00:08:41.470 "supported_io_types": { 00:08:41.470 "read": true, 00:08:41.470 "write": true, 00:08:41.470 "unmap": true, 00:08:41.470 "flush": true, 00:08:41.470 "reset": true, 00:08:41.470 "nvme_admin": false, 00:08:41.470 "nvme_io": false, 00:08:41.470 "nvme_io_md": false, 00:08:41.470 "write_zeroes": true, 
00:08:41.470 "zcopy": true, 00:08:41.470 "get_zone_info": false, 00:08:41.470 "zone_management": false, 00:08:41.470 "zone_append": false, 00:08:41.470 "compare": false, 00:08:41.470 "compare_and_write": false, 00:08:41.470 "abort": true, 00:08:41.470 "seek_hole": false, 00:08:41.470 "seek_data": false, 00:08:41.470 "copy": true, 00:08:41.470 "nvme_iov_md": false 00:08:41.470 }, 00:08:41.470 "memory_domains": [ 00:08:41.470 { 00:08:41.470 "dma_device_id": "system", 00:08:41.470 "dma_device_type": 1 00:08:41.470 }, 00:08:41.470 { 00:08:41.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.470 "dma_device_type": 2 00:08:41.470 } 00:08:41.470 ], 00:08:41.470 "driver_specific": {} 00:08:41.470 } 00:08:41.470 ] 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.470 [2024-11-20 13:22:22.904543] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:41.470 [2024-11-20 13:22:22.904597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:41.470 [2024-11-20 13:22:22.904621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:41.470 [2024-11-20 13:22:22.906509] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.470 "name": "Existed_Raid", 00:08:41.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.470 "strip_size_kb": 64, 00:08:41.470 "state": "configuring", 00:08:41.470 "raid_level": "concat", 00:08:41.470 "superblock": false, 00:08:41.470 "num_base_bdevs": 3, 00:08:41.470 "num_base_bdevs_discovered": 2, 00:08:41.470 "num_base_bdevs_operational": 3, 00:08:41.470 "base_bdevs_list": [ 00:08:41.470 { 00:08:41.470 "name": "BaseBdev1", 00:08:41.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.470 "is_configured": false, 00:08:41.470 "data_offset": 0, 00:08:41.470 "data_size": 0 00:08:41.470 }, 00:08:41.470 { 00:08:41.470 "name": "BaseBdev2", 00:08:41.470 "uuid": "ff394bd8-f6bb-4441-82e1-4c7a9f9f6abc", 00:08:41.470 "is_configured": true, 00:08:41.470 "data_offset": 0, 00:08:41.470 "data_size": 65536 00:08:41.470 }, 00:08:41.470 { 00:08:41.470 "name": "BaseBdev3", 00:08:41.470 "uuid": "d670fcdf-e0be-48ec-b028-6b41288c7d27", 00:08:41.470 "is_configured": true, 00:08:41.470 "data_offset": 0, 00:08:41.470 "data_size": 65536 00:08:41.470 } 00:08:41.470 ] 00:08:41.470 }' 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.470 13:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.731 [2024-11-20 13:22:23.331854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.731 "name": "Existed_Raid", 00:08:41.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.731 "strip_size_kb": 64, 00:08:41.731 "state": "configuring", 00:08:41.731 "raid_level": "concat", 00:08:41.731 "superblock": false, 
00:08:41.731 "num_base_bdevs": 3, 00:08:41.731 "num_base_bdevs_discovered": 1, 00:08:41.731 "num_base_bdevs_operational": 3, 00:08:41.731 "base_bdevs_list": [ 00:08:41.731 { 00:08:41.731 "name": "BaseBdev1", 00:08:41.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.731 "is_configured": false, 00:08:41.731 "data_offset": 0, 00:08:41.731 "data_size": 0 00:08:41.731 }, 00:08:41.731 { 00:08:41.731 "name": null, 00:08:41.731 "uuid": "ff394bd8-f6bb-4441-82e1-4c7a9f9f6abc", 00:08:41.731 "is_configured": false, 00:08:41.731 "data_offset": 0, 00:08:41.731 "data_size": 65536 00:08:41.731 }, 00:08:41.731 { 00:08:41.731 "name": "BaseBdev3", 00:08:41.731 "uuid": "d670fcdf-e0be-48ec-b028-6b41288c7d27", 00:08:41.731 "is_configured": true, 00:08:41.731 "data_offset": 0, 00:08:41.731 "data_size": 65536 00:08:41.731 } 00:08:41.731 ] 00:08:41.731 }' 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.731 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.301 
13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.301 [2024-11-20 13:22:23.826168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.301 BaseBdev1 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.301 [ 00:08:42.301 { 00:08:42.301 "name": "BaseBdev1", 00:08:42.301 "aliases": [ 00:08:42.301 "17cf0396-2662-4a28-a5b3-3181d710509b" 00:08:42.301 ], 00:08:42.301 "product_name": 
"Malloc disk", 00:08:42.301 "block_size": 512, 00:08:42.301 "num_blocks": 65536, 00:08:42.301 "uuid": "17cf0396-2662-4a28-a5b3-3181d710509b", 00:08:42.301 "assigned_rate_limits": { 00:08:42.301 "rw_ios_per_sec": 0, 00:08:42.301 "rw_mbytes_per_sec": 0, 00:08:42.301 "r_mbytes_per_sec": 0, 00:08:42.301 "w_mbytes_per_sec": 0 00:08:42.301 }, 00:08:42.301 "claimed": true, 00:08:42.301 "claim_type": "exclusive_write", 00:08:42.301 "zoned": false, 00:08:42.301 "supported_io_types": { 00:08:42.301 "read": true, 00:08:42.301 "write": true, 00:08:42.301 "unmap": true, 00:08:42.301 "flush": true, 00:08:42.301 "reset": true, 00:08:42.301 "nvme_admin": false, 00:08:42.301 "nvme_io": false, 00:08:42.301 "nvme_io_md": false, 00:08:42.301 "write_zeroes": true, 00:08:42.301 "zcopy": true, 00:08:42.301 "get_zone_info": false, 00:08:42.301 "zone_management": false, 00:08:42.301 "zone_append": false, 00:08:42.301 "compare": false, 00:08:42.301 "compare_and_write": false, 00:08:42.301 "abort": true, 00:08:42.301 "seek_hole": false, 00:08:42.301 "seek_data": false, 00:08:42.301 "copy": true, 00:08:42.301 "nvme_iov_md": false 00:08:42.301 }, 00:08:42.301 "memory_domains": [ 00:08:42.301 { 00:08:42.301 "dma_device_id": "system", 00:08:42.301 "dma_device_type": 1 00:08:42.301 }, 00:08:42.301 { 00:08:42.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.301 "dma_device_type": 2 00:08:42.301 } 00:08:42.301 ], 00:08:42.301 "driver_specific": {} 00:08:42.301 } 00:08:42.301 ] 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.301 13:22:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.301 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.301 "name": "Existed_Raid", 00:08:42.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.301 "strip_size_kb": 64, 00:08:42.301 "state": "configuring", 00:08:42.302 "raid_level": "concat", 00:08:42.302 "superblock": false, 00:08:42.302 "num_base_bdevs": 3, 00:08:42.302 "num_base_bdevs_discovered": 2, 00:08:42.302 "num_base_bdevs_operational": 3, 00:08:42.302 "base_bdevs_list": [ 00:08:42.302 { 00:08:42.302 "name": "BaseBdev1", 
00:08:42.302 "uuid": "17cf0396-2662-4a28-a5b3-3181d710509b", 00:08:42.302 "is_configured": true, 00:08:42.302 "data_offset": 0, 00:08:42.302 "data_size": 65536 00:08:42.302 }, 00:08:42.302 { 00:08:42.302 "name": null, 00:08:42.302 "uuid": "ff394bd8-f6bb-4441-82e1-4c7a9f9f6abc", 00:08:42.302 "is_configured": false, 00:08:42.302 "data_offset": 0, 00:08:42.302 "data_size": 65536 00:08:42.302 }, 00:08:42.302 { 00:08:42.302 "name": "BaseBdev3", 00:08:42.302 "uuid": "d670fcdf-e0be-48ec-b028-6b41288c7d27", 00:08:42.302 "is_configured": true, 00:08:42.302 "data_offset": 0, 00:08:42.302 "data_size": 65536 00:08:42.302 } 00:08:42.302 ] 00:08:42.302 }' 00:08:42.302 13:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.302 13:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.872 [2024-11-20 13:22:24.293454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:42.872 
13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.872 "name": "Existed_Raid", 00:08:42.872 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:42.872 "strip_size_kb": 64, 00:08:42.872 "state": "configuring", 00:08:42.872 "raid_level": "concat", 00:08:42.872 "superblock": false, 00:08:42.872 "num_base_bdevs": 3, 00:08:42.872 "num_base_bdevs_discovered": 1, 00:08:42.872 "num_base_bdevs_operational": 3, 00:08:42.872 "base_bdevs_list": [ 00:08:42.872 { 00:08:42.872 "name": "BaseBdev1", 00:08:42.872 "uuid": "17cf0396-2662-4a28-a5b3-3181d710509b", 00:08:42.872 "is_configured": true, 00:08:42.872 "data_offset": 0, 00:08:42.872 "data_size": 65536 00:08:42.872 }, 00:08:42.872 { 00:08:42.872 "name": null, 00:08:42.872 "uuid": "ff394bd8-f6bb-4441-82e1-4c7a9f9f6abc", 00:08:42.872 "is_configured": false, 00:08:42.872 "data_offset": 0, 00:08:42.872 "data_size": 65536 00:08:42.872 }, 00:08:42.872 { 00:08:42.872 "name": null, 00:08:42.872 "uuid": "d670fcdf-e0be-48ec-b028-6b41288c7d27", 00:08:42.872 "is_configured": false, 00:08:42.872 "data_offset": 0, 00:08:42.872 "data_size": 65536 00:08:42.872 } 00:08:42.872 ] 00:08:42.872 }' 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.872 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.133 [2024-11-20 13:22:24.716727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.133 "name": "Existed_Raid", 00:08:43.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.133 "strip_size_kb": 64, 00:08:43.133 "state": "configuring", 00:08:43.133 "raid_level": "concat", 00:08:43.133 "superblock": false, 00:08:43.133 "num_base_bdevs": 3, 00:08:43.133 "num_base_bdevs_discovered": 2, 00:08:43.133 "num_base_bdevs_operational": 3, 00:08:43.133 "base_bdevs_list": [ 00:08:43.133 { 00:08:43.133 "name": "BaseBdev1", 00:08:43.133 "uuid": "17cf0396-2662-4a28-a5b3-3181d710509b", 00:08:43.133 "is_configured": true, 00:08:43.133 "data_offset": 0, 00:08:43.133 "data_size": 65536 00:08:43.133 }, 00:08:43.133 { 00:08:43.133 "name": null, 00:08:43.133 "uuid": "ff394bd8-f6bb-4441-82e1-4c7a9f9f6abc", 00:08:43.133 "is_configured": false, 00:08:43.133 "data_offset": 0, 00:08:43.133 "data_size": 65536 00:08:43.133 }, 00:08:43.133 { 00:08:43.133 "name": "BaseBdev3", 00:08:43.133 "uuid": "d670fcdf-e0be-48ec-b028-6b41288c7d27", 00:08:43.133 "is_configured": true, 00:08:43.133 "data_offset": 0, 00:08:43.133 "data_size": 65536 00:08:43.133 } 00:08:43.133 ] 00:08:43.133 }' 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.133 13:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.704 [2024-11-20 13:22:25.175982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:43.704 "name": "Existed_Raid",
00:08:43.704 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:43.704 "strip_size_kb": 64,
00:08:43.704 "state": "configuring",
00:08:43.704 "raid_level": "concat",
00:08:43.704 "superblock": false,
00:08:43.704 "num_base_bdevs": 3,
00:08:43.704 "num_base_bdevs_discovered": 1,
00:08:43.704 "num_base_bdevs_operational": 3,
00:08:43.704 "base_bdevs_list": [
00:08:43.704 {
00:08:43.704 "name": null,
00:08:43.704 "uuid": "17cf0396-2662-4a28-a5b3-3181d710509b",
00:08:43.704 "is_configured": false,
00:08:43.704 "data_offset": 0,
00:08:43.704 "data_size": 65536
00:08:43.704 },
00:08:43.704 {
00:08:43.704 "name": null,
00:08:43.704 "uuid": "ff394bd8-f6bb-4441-82e1-4c7a9f9f6abc",
00:08:43.704 "is_configured": false,
00:08:43.704 "data_offset": 0,
00:08:43.704 "data_size": 65536
00:08:43.704 },
00:08:43.704 {
00:08:43.704 "name": "BaseBdev3",
00:08:43.704 "uuid": "d670fcdf-e0be-48ec-b028-6b41288c7d27",
00:08:43.704 "is_configured": true,
00:08:43.704 "data_offset": 0,
00:08:43.704 "data_size": 65536
00:08:43.704 }
00:08:43.704 ]
00:08:43.704 }'
00:08:43.704 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:43.705 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.964 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:43.964 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:43.964 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.964 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.224 [2024-11-20 13:22:25.645884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.224 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.225 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.225 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:44.225 "name": "Existed_Raid",
00:08:44.225 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:44.225 "strip_size_kb": 64,
00:08:44.225 "state": "configuring",
00:08:44.225 "raid_level": "concat",
00:08:44.225 "superblock": false,
00:08:44.225 "num_base_bdevs": 3,
00:08:44.225 "num_base_bdevs_discovered": 2,
00:08:44.225 "num_base_bdevs_operational": 3,
00:08:44.225 "base_bdevs_list": [
00:08:44.225 {
00:08:44.225 "name": null,
00:08:44.225 "uuid": "17cf0396-2662-4a28-a5b3-3181d710509b",
00:08:44.225 "is_configured": false,
00:08:44.225 "data_offset": 0,
00:08:44.225 "data_size": 65536
00:08:44.225 },
00:08:44.225 {
00:08:44.225 "name": "BaseBdev2",
00:08:44.225 "uuid": "ff394bd8-f6bb-4441-82e1-4c7a9f9f6abc",
00:08:44.225 "is_configured": true,
00:08:44.225 "data_offset": 0,
00:08:44.225 "data_size": 65536
00:08:44.225 },
00:08:44.225 {
00:08:44.225 "name": "BaseBdev3",
00:08:44.225 "uuid": "d670fcdf-e0be-48ec-b028-6b41288c7d27",
00:08:44.225 "is_configured": true,
00:08:44.225 "data_offset": 0,
00:08:44.225 "data_size": 65536
00:08:44.225 }
00:08:44.225 ]
00:08:44.225 }'
00:08:44.225 13:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:44.225 13:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.485 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:44.485 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:44.485 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.485 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.485 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.485 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:08:44.485 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:08:44.485 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:44.485 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.485 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 17cf0396-2662-4a28-a5b3-3181d710509b
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.745 [2024-11-20 13:22:26.172136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:08:44.745 [2024-11-20 13:22:26.172250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80
00:08:44.745 [2024-11-20 13:22:26.172277] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:08:44.745 [2024-11-20 13:22:26.172557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870
00:08:44.745 [2024-11-20 13:22:26.172721] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80
00:08:44.745 [2024-11-20 13:22:26.172763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80
00:08:44.745 [2024-11-20 13:22:26.173003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:44.745 NewBaseBdev
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.745 [
00:08:44.745 {
00:08:44.745 "name": "NewBaseBdev",
00:08:44.745 "aliases": [
00:08:44.745 "17cf0396-2662-4a28-a5b3-3181d710509b"
00:08:44.745 ],
00:08:44.745 "product_name": "Malloc disk",
00:08:44.745 "block_size": 512,
00:08:44.745 "num_blocks": 65536,
00:08:44.745 "uuid": "17cf0396-2662-4a28-a5b3-3181d710509b",
00:08:44.745 "assigned_rate_limits": {
00:08:44.745 "rw_ios_per_sec": 0,
00:08:44.745 "rw_mbytes_per_sec": 0,
00:08:44.745 "r_mbytes_per_sec": 0,
00:08:44.745 "w_mbytes_per_sec": 0
00:08:44.745 },
00:08:44.745 "claimed": true,
00:08:44.745 "claim_type": "exclusive_write",
00:08:44.745 "zoned": false,
00:08:44.745 "supported_io_types": {
00:08:44.745 "read": true,
00:08:44.745 "write": true,
00:08:44.745 "unmap": true,
00:08:44.745 "flush": true,
00:08:44.745 "reset": true,
00:08:44.745 "nvme_admin": false,
00:08:44.745 "nvme_io": false,
00:08:44.745 "nvme_io_md": false,
00:08:44.745 "write_zeroes": true,
00:08:44.745 "zcopy": true,
00:08:44.745 "get_zone_info": false,
00:08:44.745 "zone_management": false,
00:08:44.745 "zone_append": false,
00:08:44.745 "compare": false,
00:08:44.745 "compare_and_write": false,
00:08:44.745 "abort": true,
00:08:44.745 "seek_hole": false,
00:08:44.745 "seek_data": false,
00:08:44.745 "copy": true,
00:08:44.745 "nvme_iov_md": false
00:08:44.745 },
00:08:44.745 "memory_domains": [
00:08:44.745 {
00:08:44.745 "dma_device_id": "system",
00:08:44.745 "dma_device_type": 1
00:08:44.745 },
00:08:44.745 {
00:08:44.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:44.745 "dma_device_type": 2
00:08:44.745 }
00:08:44.745 ],
00:08:44.745 "driver_specific": {}
00:08:44.745 }
00:08:44.745 ]
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.745 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:44.745 "name": "Existed_Raid",
00:08:44.745 "uuid": "2f4470e1-4538-483a-80d6-faba89a94d8b",
00:08:44.745 "strip_size_kb": 64,
00:08:44.745 "state": "online",
00:08:44.745 "raid_level": "concat",
00:08:44.745 "superblock": false,
00:08:44.745 "num_base_bdevs": 3,
00:08:44.745 "num_base_bdevs_discovered": 3,
00:08:44.745 "num_base_bdevs_operational": 3,
00:08:44.745 "base_bdevs_list": [
00:08:44.745 {
00:08:44.745 "name": "NewBaseBdev",
00:08:44.745 "uuid": "17cf0396-2662-4a28-a5b3-3181d710509b",
00:08:44.745 "is_configured": true,
00:08:44.745 "data_offset": 0,
00:08:44.745 "data_size": 65536
00:08:44.745 },
00:08:44.745 {
00:08:44.745 "name": "BaseBdev2",
00:08:44.745 "uuid": "ff394bd8-f6bb-4441-82e1-4c7a9f9f6abc",
00:08:44.745 "is_configured": true,
00:08:44.745 "data_offset": 0,
00:08:44.745 "data_size": 65536
00:08:44.746 },
00:08:44.746 {
00:08:44.746 "name": "BaseBdev3",
00:08:44.746 "uuid": "d670fcdf-e0be-48ec-b028-6b41288c7d27",
00:08:44.746 "is_configured": true,
00:08:44.746 "data_offset": 0,
00:08:44.746 "data_size": 65536
00:08:44.746 }
00:08:44.746 ]
00:08:44.746 }'
00:08:44.746 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:44.746 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.316 [2024-11-20 13:22:26.691969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:45.316 "name": "Existed_Raid",
00:08:45.316 "aliases": [
00:08:45.316 "2f4470e1-4538-483a-80d6-faba89a94d8b"
00:08:45.316 ],
00:08:45.316 "product_name": "Raid Volume",
00:08:45.316 "block_size": 512,
00:08:45.316 "num_blocks": 196608,
00:08:45.316 "uuid": "2f4470e1-4538-483a-80d6-faba89a94d8b",
00:08:45.316 "assigned_rate_limits": {
00:08:45.316 "rw_ios_per_sec": 0,
00:08:45.316 "rw_mbytes_per_sec": 0,
00:08:45.316 "r_mbytes_per_sec": 0,
00:08:45.316 "w_mbytes_per_sec": 0
00:08:45.316 },
00:08:45.316 "claimed": false,
00:08:45.316 "zoned": false,
00:08:45.316 "supported_io_types": {
00:08:45.316 "read": true,
00:08:45.316 "write": true,
00:08:45.316 "unmap": true,
00:08:45.316 "flush": true,
00:08:45.316 "reset": true,
00:08:45.316 "nvme_admin": false,
00:08:45.316 "nvme_io": false,
00:08:45.316 "nvme_io_md": false,
00:08:45.316 "write_zeroes": true,
00:08:45.316 "zcopy": false,
00:08:45.316 "get_zone_info": false,
00:08:45.316 "zone_management": false,
00:08:45.316 "zone_append": false,
00:08:45.316 "compare": false,
00:08:45.316 "compare_and_write": false,
00:08:45.316 "abort": false,
00:08:45.316 "seek_hole": false,
00:08:45.316 "seek_data": false,
00:08:45.316 "copy": false,
00:08:45.316 "nvme_iov_md": false
00:08:45.316 },
00:08:45.316 "memory_domains": [
00:08:45.316 {
00:08:45.316 "dma_device_id": "system",
00:08:45.316 "dma_device_type": 1
00:08:45.316 },
00:08:45.316 {
00:08:45.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:45.316 "dma_device_type": 2
00:08:45.316 },
00:08:45.316 {
00:08:45.316 "dma_device_id": "system",
00:08:45.316 "dma_device_type": 1
00:08:45.316 },
00:08:45.316 {
00:08:45.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:45.316 "dma_device_type": 2
00:08:45.316 },
00:08:45.316 {
00:08:45.316 "dma_device_id": "system",
00:08:45.316 "dma_device_type": 1
00:08:45.316 },
00:08:45.316 {
00:08:45.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:45.316 "dma_device_type": 2
00:08:45.316 }
00:08:45.316 ],
00:08:45.316 "driver_specific": {
00:08:45.316 "raid": {
00:08:45.316 "uuid": "2f4470e1-4538-483a-80d6-faba89a94d8b",
00:08:45.316 "strip_size_kb": 64,
00:08:45.316 "state": "online",
00:08:45.316 "raid_level": "concat",
00:08:45.316 "superblock": false,
00:08:45.316 "num_base_bdevs": 3,
00:08:45.316 "num_base_bdevs_discovered": 3,
00:08:45.316 "num_base_bdevs_operational": 3,
00:08:45.316 "base_bdevs_list": [
00:08:45.316 {
00:08:45.316 "name": "NewBaseBdev",
00:08:45.316 "uuid": "17cf0396-2662-4a28-a5b3-3181d710509b",
00:08:45.316 "is_configured": true,
00:08:45.316 "data_offset": 0,
00:08:45.316 "data_size": 65536
00:08:45.316 },
00:08:45.316 {
00:08:45.316 "name": "BaseBdev2",
00:08:45.316 "uuid": "ff394bd8-f6bb-4441-82e1-4c7a9f9f6abc",
00:08:45.316 "is_configured": true,
00:08:45.316 "data_offset": 0,
00:08:45.316 "data_size": 65536
00:08:45.316 },
00:08:45.316 {
00:08:45.316 "name": "BaseBdev3",
00:08:45.316 "uuid": "d670fcdf-e0be-48ec-b028-6b41288c7d27",
00:08:45.316 "is_configured": true,
00:08:45.316 "data_offset": 0,
00:08:45.316 "data_size": 65536
00:08:45.316 }
00:08:45.316 ]
00:08:45.316 }
00:08:45.316 }
00:08:45.316 }'
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:08:45.316 BaseBdev2
00:08:45.316 BaseBdev3'
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.316 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.316 [2024-11-20 13:22:26.939237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:45.317 [2024-11-20 13:22:26.939308] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:45.317 [2024-11-20 13:22:26.939410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:45.317 [2024-11-20 13:22:26.939503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:45.317 [2024-11-20 13:22:26.939566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline
00:08:45.317 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.317 13:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76446
00:08:45.317 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 76446 ']'
00:08:45.317 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 76446
00:08:45.317 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:08:45.317 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:45.317 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76446
00:08:45.577 killing process with pid 76446
00:08:45.577 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:45.577 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:45.577 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76446'
00:08:45.577 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 76446
00:08:45.577 [2024-11-20 13:22:26.989330] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:45.577 13:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 76446
00:08:45.577 [2024-11-20 13:22:27.020816] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:45.577 13:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:08:45.577
00:08:45.577 real 0m8.652s
00:08:45.577 user 0m14.813s
00:08:45.577 sys 0m1.710s
00:08:45.577 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:45.577 13:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.577 ************************************
00:08:45.577 END TEST raid_state_function_test
00:08:45.577 ************************************
00:08:45.837 13:22:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true
00:08:45.837 13:22:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:45.837 13:22:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:45.837 13:22:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:45.837 ************************************
00:08:45.837 START TEST raid_state_function_test_sb
00:08:45.837 ************************************
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77045
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77045'
Process raid pid: 77045
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77045
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77045 ']'
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:45.837 13:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.837 [2024-11-20 13:22:27.397917] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:08:45.837 [2024-11-20 13:22:27.398163] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:46.097 [2024-11-20 13:22:27.554539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:46.097 [2024-11-20 13:22:27.580085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:46.097 [2024-11-20 13:22:27.623141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:46.097 [2024-11-20 13:22:27.623272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:46.667 [2024-11-20 13:22:28.224916] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:46.667 [2024-11-20 13:22:28.225038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:46.667 [2024-11-20 13:22:28.225069] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:46.667 [2024-11-20 13:22:28.225092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:46.667 [2024-11-20 13:22:28.225109] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:46.667 [2024-11-20 13:22:28.225132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:46.667 "name": "Existed_Raid",
00:08:46.667 "uuid": "04998790-ffe7-433c-96b8-2e16db1c0c7f",
00:08:46.667 "strip_size_kb": 64,
00:08:46.667 "state": "configuring",
00:08:46.667 "raid_level": "concat",
00:08:46.667 "superblock": true,
00:08:46.667 "num_base_bdevs": 3,
00:08:46.667 "num_base_bdevs_discovered": 0,
00:08:46.667 "num_base_bdevs_operational": 3,
00:08:46.667 "base_bdevs_list": [
00:08:46.667 {
00:08:46.667 "name": "BaseBdev1",
00:08:46.667 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:46.667 "is_configured": false,
00:08:46.667 "data_offset": 0,
00:08:46.667 "data_size": 0
00:08:46.667 },
00:08:46.667 {
00:08:46.667 "name": "BaseBdev2",
00:08:46.667 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:46.667 "is_configured": false,
00:08:46.667 "data_offset": 0,
00:08:46.667 "data_size": 0
00:08:46.667 },
00:08:46.667 {
00:08:46.667 "name": "BaseBdev3",
00:08:46.667 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:46.667 "is_configured": false,
00:08:46.667 "data_offset": 0,
00:08:46.667 "data_size": 0
00:08:46.667 }
00:08:46.667 ]
00:08:46.667 }'
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:46.667 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:47.237 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:47.237 13:22:28 bdev_raid.raid_state_function_test_sb
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.238 [2024-11-20 13:22:28.688027] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.238 [2024-11-20 13:22:28.688105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.238 [2024-11-20 13:22:28.700023] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.238 [2024-11-20 13:22:28.700102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.238 [2024-11-20 13:22:28.700129] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.238 [2024-11-20 13:22:28.700151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.238 [2024-11-20 13:22:28.700168] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:47.238 [2024-11-20 13:22:28.700188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:47.238 
13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.238 [2024-11-20 13:22:28.720984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.238 BaseBdev1 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.238 [ 00:08:47.238 { 
00:08:47.238 "name": "BaseBdev1", 00:08:47.238 "aliases": [ 00:08:47.238 "84a2b8c3-b64e-4700-9528-91dd990139cf" 00:08:47.238 ], 00:08:47.238 "product_name": "Malloc disk", 00:08:47.238 "block_size": 512, 00:08:47.238 "num_blocks": 65536, 00:08:47.238 "uuid": "84a2b8c3-b64e-4700-9528-91dd990139cf", 00:08:47.238 "assigned_rate_limits": { 00:08:47.238 "rw_ios_per_sec": 0, 00:08:47.238 "rw_mbytes_per_sec": 0, 00:08:47.238 "r_mbytes_per_sec": 0, 00:08:47.238 "w_mbytes_per_sec": 0 00:08:47.238 }, 00:08:47.238 "claimed": true, 00:08:47.238 "claim_type": "exclusive_write", 00:08:47.238 "zoned": false, 00:08:47.238 "supported_io_types": { 00:08:47.238 "read": true, 00:08:47.238 "write": true, 00:08:47.238 "unmap": true, 00:08:47.238 "flush": true, 00:08:47.238 "reset": true, 00:08:47.238 "nvme_admin": false, 00:08:47.238 "nvme_io": false, 00:08:47.238 "nvme_io_md": false, 00:08:47.238 "write_zeroes": true, 00:08:47.238 "zcopy": true, 00:08:47.238 "get_zone_info": false, 00:08:47.238 "zone_management": false, 00:08:47.238 "zone_append": false, 00:08:47.238 "compare": false, 00:08:47.238 "compare_and_write": false, 00:08:47.238 "abort": true, 00:08:47.238 "seek_hole": false, 00:08:47.238 "seek_data": false, 00:08:47.238 "copy": true, 00:08:47.238 "nvme_iov_md": false 00:08:47.238 }, 00:08:47.238 "memory_domains": [ 00:08:47.238 { 00:08:47.238 "dma_device_id": "system", 00:08:47.238 "dma_device_type": 1 00:08:47.238 }, 00:08:47.238 { 00:08:47.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.238 "dma_device_type": 2 00:08:47.238 } 00:08:47.238 ], 00:08:47.238 "driver_specific": {} 00:08:47.238 } 00:08:47.238 ] 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.238 "name": "Existed_Raid", 00:08:47.238 "uuid": "6c96b1cb-7e98-46cc-8f58-dba13cdc1d95", 00:08:47.238 "strip_size_kb": 64, 00:08:47.238 "state": "configuring", 00:08:47.238 "raid_level": "concat", 00:08:47.238 "superblock": true, 00:08:47.238 
"num_base_bdevs": 3, 00:08:47.238 "num_base_bdevs_discovered": 1, 00:08:47.238 "num_base_bdevs_operational": 3, 00:08:47.238 "base_bdevs_list": [ 00:08:47.238 { 00:08:47.238 "name": "BaseBdev1", 00:08:47.238 "uuid": "84a2b8c3-b64e-4700-9528-91dd990139cf", 00:08:47.238 "is_configured": true, 00:08:47.238 "data_offset": 2048, 00:08:47.238 "data_size": 63488 00:08:47.238 }, 00:08:47.238 { 00:08:47.238 "name": "BaseBdev2", 00:08:47.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.238 "is_configured": false, 00:08:47.238 "data_offset": 0, 00:08:47.238 "data_size": 0 00:08:47.238 }, 00:08:47.238 { 00:08:47.238 "name": "BaseBdev3", 00:08:47.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.238 "is_configured": false, 00:08:47.238 "data_offset": 0, 00:08:47.238 "data_size": 0 00:08:47.238 } 00:08:47.238 ] 00:08:47.238 }' 00:08:47.238 13:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.239 13:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.808 [2024-11-20 13:22:29.240150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.808 [2024-11-20 13:22:29.240258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.808 
13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.808 [2024-11-20 13:22:29.252168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.808 [2024-11-20 13:22:29.254045] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.808 [2024-11-20 13:22:29.254122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.808 [2024-11-20 13:22:29.254150] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:47.808 [2024-11-20 13:22:29.254175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.808 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.808 "name": "Existed_Raid", 00:08:47.808 "uuid": "671041c4-f6ed-4286-985a-8064e59df85b", 00:08:47.808 "strip_size_kb": 64, 00:08:47.808 "state": "configuring", 00:08:47.808 "raid_level": "concat", 00:08:47.808 "superblock": true, 00:08:47.808 "num_base_bdevs": 3, 00:08:47.808 "num_base_bdevs_discovered": 1, 00:08:47.808 "num_base_bdevs_operational": 3, 00:08:47.808 "base_bdevs_list": [ 00:08:47.808 { 00:08:47.808 "name": "BaseBdev1", 00:08:47.808 "uuid": "84a2b8c3-b64e-4700-9528-91dd990139cf", 00:08:47.808 "is_configured": true, 00:08:47.808 "data_offset": 2048, 00:08:47.808 "data_size": 63488 00:08:47.808 }, 00:08:47.808 { 00:08:47.808 "name": "BaseBdev2", 00:08:47.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.809 "is_configured": false, 00:08:47.809 "data_offset": 0, 00:08:47.809 "data_size": 0 00:08:47.809 }, 00:08:47.809 { 00:08:47.809 "name": "BaseBdev3", 00:08:47.809 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:47.809 "is_configured": false, 00:08:47.809 "data_offset": 0, 00:08:47.809 "data_size": 0 00:08:47.809 } 00:08:47.809 ] 00:08:47.809 }' 00:08:47.809 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.809 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.069 [2024-11-20 13:22:29.718380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.069 BaseBdev2 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.069 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.329 [ 00:08:48.329 { 00:08:48.329 "name": "BaseBdev2", 00:08:48.329 "aliases": [ 00:08:48.329 "9d2d7dfd-0c3b-4a7f-a823-fefa7a7fa1bc" 00:08:48.329 ], 00:08:48.329 "product_name": "Malloc disk", 00:08:48.329 "block_size": 512, 00:08:48.329 "num_blocks": 65536, 00:08:48.329 "uuid": "9d2d7dfd-0c3b-4a7f-a823-fefa7a7fa1bc", 00:08:48.329 "assigned_rate_limits": { 00:08:48.329 "rw_ios_per_sec": 0, 00:08:48.329 "rw_mbytes_per_sec": 0, 00:08:48.329 "r_mbytes_per_sec": 0, 00:08:48.329 "w_mbytes_per_sec": 0 00:08:48.329 }, 00:08:48.329 "claimed": true, 00:08:48.329 "claim_type": "exclusive_write", 00:08:48.329 "zoned": false, 00:08:48.329 "supported_io_types": { 00:08:48.329 "read": true, 00:08:48.329 "write": true, 00:08:48.329 "unmap": true, 00:08:48.329 "flush": true, 00:08:48.329 "reset": true, 00:08:48.329 "nvme_admin": false, 00:08:48.329 "nvme_io": false, 00:08:48.329 "nvme_io_md": false, 00:08:48.329 "write_zeroes": true, 00:08:48.329 "zcopy": true, 00:08:48.329 "get_zone_info": false, 00:08:48.329 "zone_management": false, 00:08:48.329 "zone_append": false, 00:08:48.329 "compare": false, 00:08:48.329 "compare_and_write": false, 00:08:48.329 "abort": true, 00:08:48.329 "seek_hole": false, 00:08:48.329 "seek_data": false, 00:08:48.329 "copy": true, 00:08:48.329 "nvme_iov_md": false 00:08:48.329 }, 00:08:48.329 "memory_domains": [ 00:08:48.329 { 00:08:48.329 "dma_device_id": "system", 00:08:48.329 "dma_device_type": 1 00:08:48.329 }, 00:08:48.329 { 00:08:48.329 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.329 "dma_device_type": 2 00:08:48.329 } 00:08:48.329 ], 00:08:48.329 "driver_specific": {} 00:08:48.329 } 00:08:48.329 ] 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.329 "name": "Existed_Raid", 00:08:48.329 "uuid": "671041c4-f6ed-4286-985a-8064e59df85b", 00:08:48.329 "strip_size_kb": 64, 00:08:48.329 "state": "configuring", 00:08:48.329 "raid_level": "concat", 00:08:48.329 "superblock": true, 00:08:48.329 "num_base_bdevs": 3, 00:08:48.329 "num_base_bdevs_discovered": 2, 00:08:48.329 "num_base_bdevs_operational": 3, 00:08:48.329 "base_bdevs_list": [ 00:08:48.329 { 00:08:48.329 "name": "BaseBdev1", 00:08:48.329 "uuid": "84a2b8c3-b64e-4700-9528-91dd990139cf", 00:08:48.329 "is_configured": true, 00:08:48.329 "data_offset": 2048, 00:08:48.329 "data_size": 63488 00:08:48.329 }, 00:08:48.329 { 00:08:48.329 "name": "BaseBdev2", 00:08:48.329 "uuid": "9d2d7dfd-0c3b-4a7f-a823-fefa7a7fa1bc", 00:08:48.329 "is_configured": true, 00:08:48.329 "data_offset": 2048, 00:08:48.329 "data_size": 63488 00:08:48.329 }, 00:08:48.329 { 00:08:48.329 "name": "BaseBdev3", 00:08:48.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.329 "is_configured": false, 00:08:48.329 "data_offset": 0, 00:08:48.329 "data_size": 0 00:08:48.329 } 00:08:48.329 ] 00:08:48.329 }' 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.329 13:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.589 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:48.589 13:22:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.589 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.589 [2024-11-20 13:22:30.239532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.589 [2024-11-20 13:22:30.239864] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:48.589 [2024-11-20 13:22:30.239931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:48.589 [2024-11-20 13:22:30.240316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:48.589 BaseBdev3 00:08:48.589 [2024-11-20 13:22:30.240523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:48.589 [2024-11-20 13:22:30.240544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:48.589 [2024-11-20 13:22:30.240697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.589 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.589 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:48.589 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:48.589 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.589 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:48.589 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.589 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.590 13:22:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.590 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.590 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.590 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.590 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:48.590 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.590 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.849 [ 00:08:48.849 { 00:08:48.849 "name": "BaseBdev3", 00:08:48.849 "aliases": [ 00:08:48.849 "eb8fb997-5909-4c55-9b5c-9467b9dc10fd" 00:08:48.849 ], 00:08:48.849 "product_name": "Malloc disk", 00:08:48.849 "block_size": 512, 00:08:48.849 "num_blocks": 65536, 00:08:48.849 "uuid": "eb8fb997-5909-4c55-9b5c-9467b9dc10fd", 00:08:48.849 "assigned_rate_limits": { 00:08:48.849 "rw_ios_per_sec": 0, 00:08:48.849 "rw_mbytes_per_sec": 0, 00:08:48.849 "r_mbytes_per_sec": 0, 00:08:48.849 "w_mbytes_per_sec": 0 00:08:48.849 }, 00:08:48.849 "claimed": true, 00:08:48.849 "claim_type": "exclusive_write", 00:08:48.849 "zoned": false, 00:08:48.849 "supported_io_types": { 00:08:48.849 "read": true, 00:08:48.849 "write": true, 00:08:48.849 "unmap": true, 00:08:48.849 "flush": true, 00:08:48.849 "reset": true, 00:08:48.849 "nvme_admin": false, 00:08:48.849 "nvme_io": false, 00:08:48.849 "nvme_io_md": false, 00:08:48.850 "write_zeroes": true, 00:08:48.850 "zcopy": true, 00:08:48.850 "get_zone_info": false, 00:08:48.850 "zone_management": false, 00:08:48.850 "zone_append": false, 00:08:48.850 "compare": false, 00:08:48.850 "compare_and_write": false, 00:08:48.850 "abort": true, 00:08:48.850 "seek_hole": false, 00:08:48.850 "seek_data": false, 
00:08:48.850 "copy": true, 00:08:48.850 "nvme_iov_md": false 00:08:48.850 }, 00:08:48.850 "memory_domains": [ 00:08:48.850 { 00:08:48.850 "dma_device_id": "system", 00:08:48.850 "dma_device_type": 1 00:08:48.850 }, 00:08:48.850 { 00:08:48.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.850 "dma_device_type": 2 00:08:48.850 } 00:08:48.850 ], 00:08:48.850 "driver_specific": {} 00:08:48.850 } 00:08:48.850 ] 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.850 "name": "Existed_Raid", 00:08:48.850 "uuid": "671041c4-f6ed-4286-985a-8064e59df85b", 00:08:48.850 "strip_size_kb": 64, 00:08:48.850 "state": "online", 00:08:48.850 "raid_level": "concat", 00:08:48.850 "superblock": true, 00:08:48.850 "num_base_bdevs": 3, 00:08:48.850 "num_base_bdevs_discovered": 3, 00:08:48.850 "num_base_bdevs_operational": 3, 00:08:48.850 "base_bdevs_list": [ 00:08:48.850 { 00:08:48.850 "name": "BaseBdev1", 00:08:48.850 "uuid": "84a2b8c3-b64e-4700-9528-91dd990139cf", 00:08:48.850 "is_configured": true, 00:08:48.850 "data_offset": 2048, 00:08:48.850 "data_size": 63488 00:08:48.850 }, 00:08:48.850 { 00:08:48.850 "name": "BaseBdev2", 00:08:48.850 "uuid": "9d2d7dfd-0c3b-4a7f-a823-fefa7a7fa1bc", 00:08:48.850 "is_configured": true, 00:08:48.850 "data_offset": 2048, 00:08:48.850 "data_size": 63488 00:08:48.850 }, 00:08:48.850 { 00:08:48.850 "name": "BaseBdev3", 00:08:48.850 "uuid": "eb8fb997-5909-4c55-9b5c-9467b9dc10fd", 00:08:48.850 "is_configured": true, 00:08:48.850 "data_offset": 2048, 00:08:48.850 "data_size": 63488 00:08:48.850 } 00:08:48.850 ] 00:08:48.850 }' 00:08:48.850 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.850 13:22:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.110 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:49.110 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:49.110 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:49.110 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.110 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.110 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.110 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:49.110 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.110 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.110 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.110 [2024-11-20 13:22:30.731060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.110 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.110 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.110 "name": "Existed_Raid", 00:08:49.110 "aliases": [ 00:08:49.110 "671041c4-f6ed-4286-985a-8064e59df85b" 00:08:49.110 ], 00:08:49.110 "product_name": "Raid Volume", 00:08:49.110 "block_size": 512, 00:08:49.110 "num_blocks": 190464, 00:08:49.110 "uuid": "671041c4-f6ed-4286-985a-8064e59df85b", 00:08:49.110 "assigned_rate_limits": { 00:08:49.110 "rw_ios_per_sec": 0, 00:08:49.110 "rw_mbytes_per_sec": 0, 00:08:49.110 
"r_mbytes_per_sec": 0, 00:08:49.110 "w_mbytes_per_sec": 0 00:08:49.110 }, 00:08:49.110 "claimed": false, 00:08:49.110 "zoned": false, 00:08:49.110 "supported_io_types": { 00:08:49.110 "read": true, 00:08:49.110 "write": true, 00:08:49.110 "unmap": true, 00:08:49.110 "flush": true, 00:08:49.110 "reset": true, 00:08:49.110 "nvme_admin": false, 00:08:49.110 "nvme_io": false, 00:08:49.110 "nvme_io_md": false, 00:08:49.110 "write_zeroes": true, 00:08:49.110 "zcopy": false, 00:08:49.110 "get_zone_info": false, 00:08:49.110 "zone_management": false, 00:08:49.110 "zone_append": false, 00:08:49.110 "compare": false, 00:08:49.110 "compare_and_write": false, 00:08:49.110 "abort": false, 00:08:49.110 "seek_hole": false, 00:08:49.110 "seek_data": false, 00:08:49.110 "copy": false, 00:08:49.110 "nvme_iov_md": false 00:08:49.110 }, 00:08:49.110 "memory_domains": [ 00:08:49.110 { 00:08:49.110 "dma_device_id": "system", 00:08:49.110 "dma_device_type": 1 00:08:49.110 }, 00:08:49.110 { 00:08:49.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.110 "dma_device_type": 2 00:08:49.111 }, 00:08:49.111 { 00:08:49.111 "dma_device_id": "system", 00:08:49.111 "dma_device_type": 1 00:08:49.111 }, 00:08:49.111 { 00:08:49.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.111 "dma_device_type": 2 00:08:49.111 }, 00:08:49.111 { 00:08:49.111 "dma_device_id": "system", 00:08:49.111 "dma_device_type": 1 00:08:49.111 }, 00:08:49.111 { 00:08:49.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.111 "dma_device_type": 2 00:08:49.111 } 00:08:49.111 ], 00:08:49.111 "driver_specific": { 00:08:49.111 "raid": { 00:08:49.111 "uuid": "671041c4-f6ed-4286-985a-8064e59df85b", 00:08:49.111 "strip_size_kb": 64, 00:08:49.111 "state": "online", 00:08:49.111 "raid_level": "concat", 00:08:49.111 "superblock": true, 00:08:49.111 "num_base_bdevs": 3, 00:08:49.111 "num_base_bdevs_discovered": 3, 00:08:49.111 "num_base_bdevs_operational": 3, 00:08:49.111 "base_bdevs_list": [ 00:08:49.111 { 00:08:49.111 
"name": "BaseBdev1", 00:08:49.111 "uuid": "84a2b8c3-b64e-4700-9528-91dd990139cf", 00:08:49.111 "is_configured": true, 00:08:49.111 "data_offset": 2048, 00:08:49.111 "data_size": 63488 00:08:49.111 }, 00:08:49.111 { 00:08:49.111 "name": "BaseBdev2", 00:08:49.111 "uuid": "9d2d7dfd-0c3b-4a7f-a823-fefa7a7fa1bc", 00:08:49.111 "is_configured": true, 00:08:49.111 "data_offset": 2048, 00:08:49.111 "data_size": 63488 00:08:49.111 }, 00:08:49.111 { 00:08:49.111 "name": "BaseBdev3", 00:08:49.111 "uuid": "eb8fb997-5909-4c55-9b5c-9467b9dc10fd", 00:08:49.111 "is_configured": true, 00:08:49.111 "data_offset": 2048, 00:08:49.111 "data_size": 63488 00:08:49.111 } 00:08:49.111 ] 00:08:49.111 } 00:08:49.111 } 00:08:49.111 }' 00:08:49.111 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:49.372 BaseBdev2 00:08:49.372 BaseBdev3' 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.372 13:22:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.372 13:22:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.372 [2024-11-20 13:22:31.018285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:49.372 [2024-11-20 13:22:31.018357] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.372 [2024-11-20 13:22:31.018430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.372 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.645 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.645 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.645 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.645 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.645 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.645 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.645 "name": "Existed_Raid", 00:08:49.645 "uuid": "671041c4-f6ed-4286-985a-8064e59df85b", 00:08:49.645 "strip_size_kb": 64, 00:08:49.645 "state": "offline", 00:08:49.645 "raid_level": "concat", 00:08:49.645 "superblock": true, 00:08:49.645 "num_base_bdevs": 3, 00:08:49.645 "num_base_bdevs_discovered": 2, 00:08:49.645 "num_base_bdevs_operational": 2, 00:08:49.645 "base_bdevs_list": [ 00:08:49.645 { 00:08:49.645 "name": null, 00:08:49.645 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:49.645 "is_configured": false, 00:08:49.645 "data_offset": 0, 00:08:49.645 "data_size": 63488 00:08:49.645 }, 00:08:49.645 { 00:08:49.645 "name": "BaseBdev2", 00:08:49.645 "uuid": "9d2d7dfd-0c3b-4a7f-a823-fefa7a7fa1bc", 00:08:49.645 "is_configured": true, 00:08:49.645 "data_offset": 2048, 00:08:49.645 "data_size": 63488 00:08:49.645 }, 00:08:49.645 { 00:08:49.645 "name": "BaseBdev3", 00:08:49.645 "uuid": "eb8fb997-5909-4c55-9b5c-9467b9dc10fd", 00:08:49.645 "is_configured": true, 00:08:49.645 "data_offset": 2048, 00:08:49.645 "data_size": 63488 00:08:49.645 } 00:08:49.645 ] 00:08:49.645 }' 00:08:49.645 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.645 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 [2024-11-20 13:22:31.524887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.918 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 [2024-11-20 13:22:31.576109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:49.918 [2024-11-20 13:22:31.576198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.179 BaseBdev2 00:08:50.179 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.179 
13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.180 [ 00:08:50.180 { 00:08:50.180 "name": "BaseBdev2", 00:08:50.180 "aliases": [ 00:08:50.180 "0615abce-7f89-4dee-ab92-065ae110f3fc" 00:08:50.180 ], 00:08:50.180 "product_name": "Malloc disk", 00:08:50.180 "block_size": 512, 00:08:50.180 "num_blocks": 65536, 00:08:50.180 "uuid": "0615abce-7f89-4dee-ab92-065ae110f3fc", 00:08:50.180 "assigned_rate_limits": { 00:08:50.180 "rw_ios_per_sec": 0, 00:08:50.180 "rw_mbytes_per_sec": 0, 00:08:50.180 "r_mbytes_per_sec": 0, 00:08:50.180 "w_mbytes_per_sec": 0 
00:08:50.180 }, 00:08:50.180 "claimed": false, 00:08:50.180 "zoned": false, 00:08:50.180 "supported_io_types": { 00:08:50.180 "read": true, 00:08:50.180 "write": true, 00:08:50.180 "unmap": true, 00:08:50.180 "flush": true, 00:08:50.180 "reset": true, 00:08:50.180 "nvme_admin": false, 00:08:50.180 "nvme_io": false, 00:08:50.180 "nvme_io_md": false, 00:08:50.180 "write_zeroes": true, 00:08:50.180 "zcopy": true, 00:08:50.180 "get_zone_info": false, 00:08:50.180 "zone_management": false, 00:08:50.180 "zone_append": false, 00:08:50.180 "compare": false, 00:08:50.180 "compare_and_write": false, 00:08:50.180 "abort": true, 00:08:50.180 "seek_hole": false, 00:08:50.180 "seek_data": false, 00:08:50.180 "copy": true, 00:08:50.180 "nvme_iov_md": false 00:08:50.180 }, 00:08:50.180 "memory_domains": [ 00:08:50.180 { 00:08:50.180 "dma_device_id": "system", 00:08:50.180 "dma_device_type": 1 00:08:50.180 }, 00:08:50.180 { 00:08:50.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.180 "dma_device_type": 2 00:08:50.180 } 00:08:50.180 ], 00:08:50.180 "driver_specific": {} 00:08:50.180 } 00:08:50.180 ] 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.180 BaseBdev3 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.180 [ 00:08:50.180 { 00:08:50.180 "name": "BaseBdev3", 00:08:50.180 "aliases": [ 00:08:50.180 "a74e7e4a-ced8-4527-abb2-fbbad9ffa938" 00:08:50.180 ], 00:08:50.180 "product_name": "Malloc disk", 00:08:50.180 "block_size": 512, 00:08:50.180 "num_blocks": 65536, 00:08:50.180 "uuid": "a74e7e4a-ced8-4527-abb2-fbbad9ffa938", 00:08:50.180 "assigned_rate_limits": { 00:08:50.180 "rw_ios_per_sec": 0, 00:08:50.180 "rw_mbytes_per_sec": 0, 
00:08:50.180 "r_mbytes_per_sec": 0, 00:08:50.180 "w_mbytes_per_sec": 0 00:08:50.180 }, 00:08:50.180 "claimed": false, 00:08:50.180 "zoned": false, 00:08:50.180 "supported_io_types": { 00:08:50.180 "read": true, 00:08:50.180 "write": true, 00:08:50.180 "unmap": true, 00:08:50.180 "flush": true, 00:08:50.180 "reset": true, 00:08:50.180 "nvme_admin": false, 00:08:50.180 "nvme_io": false, 00:08:50.180 "nvme_io_md": false, 00:08:50.180 "write_zeroes": true, 00:08:50.180 "zcopy": true, 00:08:50.180 "get_zone_info": false, 00:08:50.180 "zone_management": false, 00:08:50.180 "zone_append": false, 00:08:50.180 "compare": false, 00:08:50.180 "compare_and_write": false, 00:08:50.180 "abort": true, 00:08:50.180 "seek_hole": false, 00:08:50.180 "seek_data": false, 00:08:50.180 "copy": true, 00:08:50.180 "nvme_iov_md": false 00:08:50.180 }, 00:08:50.180 "memory_domains": [ 00:08:50.180 { 00:08:50.180 "dma_device_id": "system", 00:08:50.180 "dma_device_type": 1 00:08:50.180 }, 00:08:50.180 { 00:08:50.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.180 "dma_device_type": 2 00:08:50.180 } 00:08:50.180 ], 00:08:50.180 "driver_specific": {} 00:08:50.180 } 00:08:50.180 ] 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.180 [2024-11-20 13:22:31.752447] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:50.180 [2024-11-20 13:22:31.752536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:50.180 [2024-11-20 13:22:31.752594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.180 [2024-11-20 13:22:31.754455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.180 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.180 "name": "Existed_Raid", 00:08:50.180 "uuid": "5b29db10-e123-47e1-afe4-1f17fb186087", 00:08:50.180 "strip_size_kb": 64, 00:08:50.180 "state": "configuring", 00:08:50.180 "raid_level": "concat", 00:08:50.180 "superblock": true, 00:08:50.180 "num_base_bdevs": 3, 00:08:50.180 "num_base_bdevs_discovered": 2, 00:08:50.180 "num_base_bdevs_operational": 3, 00:08:50.180 "base_bdevs_list": [ 00:08:50.180 { 00:08:50.180 "name": "BaseBdev1", 00:08:50.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.180 "is_configured": false, 00:08:50.180 "data_offset": 0, 00:08:50.180 "data_size": 0 00:08:50.180 }, 00:08:50.180 { 00:08:50.181 "name": "BaseBdev2", 00:08:50.181 "uuid": "0615abce-7f89-4dee-ab92-065ae110f3fc", 00:08:50.181 "is_configured": true, 00:08:50.181 "data_offset": 2048, 00:08:50.181 "data_size": 63488 00:08:50.181 }, 00:08:50.181 { 00:08:50.181 "name": "BaseBdev3", 00:08:50.181 "uuid": "a74e7e4a-ced8-4527-abb2-fbbad9ffa938", 00:08:50.181 "is_configured": true, 00:08:50.181 "data_offset": 2048, 00:08:50.181 "data_size": 63488 00:08:50.181 } 00:08:50.181 ] 00:08:50.181 }' 00:08:50.181 13:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.181 13:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev2 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.750 [2024-11-20 13:22:32.191727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.750 "name": "Existed_Raid", 00:08:50.750 "uuid": "5b29db10-e123-47e1-afe4-1f17fb186087", 00:08:50.750 "strip_size_kb": 64, 00:08:50.750 "state": "configuring", 00:08:50.750 "raid_level": "concat", 00:08:50.750 "superblock": true, 00:08:50.750 "num_base_bdevs": 3, 00:08:50.750 "num_base_bdevs_discovered": 1, 00:08:50.750 "num_base_bdevs_operational": 3, 00:08:50.750 "base_bdevs_list": [ 00:08:50.750 { 00:08:50.750 "name": "BaseBdev1", 00:08:50.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.750 "is_configured": false, 00:08:50.750 "data_offset": 0, 00:08:50.750 "data_size": 0 00:08:50.750 }, 00:08:50.750 { 00:08:50.750 "name": null, 00:08:50.750 "uuid": "0615abce-7f89-4dee-ab92-065ae110f3fc", 00:08:50.750 "is_configured": false, 00:08:50.750 "data_offset": 0, 00:08:50.750 "data_size": 63488 00:08:50.750 }, 00:08:50.750 { 00:08:50.750 "name": "BaseBdev3", 00:08:50.750 "uuid": "a74e7e4a-ced8-4527-abb2-fbbad9ffa938", 00:08:50.750 "is_configured": true, 00:08:50.750 "data_offset": 2048, 00:08:50.750 "data_size": 63488 00:08:50.750 } 00:08:50.750 ] 00:08:50.750 }' 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.750 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.010 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.010 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:51.010 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:51.010 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.010 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.270 [2024-11-20 13:22:32.690066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:51.270 BaseBdev1 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.270 [ 00:08:51.270 { 00:08:51.270 "name": "BaseBdev1", 00:08:51.270 "aliases": [ 00:08:51.270 "367884a3-3e24-47fa-bfbb-2695143df045" 00:08:51.270 ], 00:08:51.270 "product_name": "Malloc disk", 00:08:51.270 "block_size": 512, 00:08:51.270 "num_blocks": 65536, 00:08:51.270 "uuid": "367884a3-3e24-47fa-bfbb-2695143df045", 00:08:51.270 "assigned_rate_limits": { 00:08:51.270 "rw_ios_per_sec": 0, 00:08:51.270 "rw_mbytes_per_sec": 0, 00:08:51.270 "r_mbytes_per_sec": 0, 00:08:51.270 "w_mbytes_per_sec": 0 00:08:51.270 }, 00:08:51.270 "claimed": true, 00:08:51.270 "claim_type": "exclusive_write", 00:08:51.270 "zoned": false, 00:08:51.270 "supported_io_types": { 00:08:51.270 "read": true, 00:08:51.270 "write": true, 00:08:51.270 "unmap": true, 00:08:51.270 "flush": true, 00:08:51.270 "reset": true, 00:08:51.270 "nvme_admin": false, 00:08:51.270 "nvme_io": false, 00:08:51.270 "nvme_io_md": false, 00:08:51.270 "write_zeroes": true, 00:08:51.270 "zcopy": true, 00:08:51.270 "get_zone_info": false, 00:08:51.270 "zone_management": false, 00:08:51.270 "zone_append": false, 00:08:51.270 "compare": false, 00:08:51.270 "compare_and_write": false, 00:08:51.270 "abort": true, 00:08:51.270 "seek_hole": false, 00:08:51.270 "seek_data": false, 00:08:51.270 "copy": true, 00:08:51.270 "nvme_iov_md": false 00:08:51.270 }, 00:08:51.270 "memory_domains": [ 00:08:51.270 { 00:08:51.270 "dma_device_id": "system", 00:08:51.270 "dma_device_type": 1 00:08:51.270 }, 00:08:51.270 { 00:08:51.270 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:51.270 "dma_device_type": 2 00:08:51.270 } 00:08:51.270 ], 00:08:51.270 "driver_specific": {} 00:08:51.270 } 00:08:51.270 ] 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.270 "name": "Existed_Raid", 00:08:51.270 "uuid": "5b29db10-e123-47e1-afe4-1f17fb186087", 00:08:51.270 "strip_size_kb": 64, 00:08:51.270 "state": "configuring", 00:08:51.270 "raid_level": "concat", 00:08:51.270 "superblock": true, 00:08:51.270 "num_base_bdevs": 3, 00:08:51.270 "num_base_bdevs_discovered": 2, 00:08:51.270 "num_base_bdevs_operational": 3, 00:08:51.270 "base_bdevs_list": [ 00:08:51.270 { 00:08:51.270 "name": "BaseBdev1", 00:08:51.270 "uuid": "367884a3-3e24-47fa-bfbb-2695143df045", 00:08:51.270 "is_configured": true, 00:08:51.270 "data_offset": 2048, 00:08:51.270 "data_size": 63488 00:08:51.270 }, 00:08:51.270 { 00:08:51.270 "name": null, 00:08:51.270 "uuid": "0615abce-7f89-4dee-ab92-065ae110f3fc", 00:08:51.270 "is_configured": false, 00:08:51.270 "data_offset": 0, 00:08:51.270 "data_size": 63488 00:08:51.270 }, 00:08:51.270 { 00:08:51.270 "name": "BaseBdev3", 00:08:51.270 "uuid": "a74e7e4a-ced8-4527-abb2-fbbad9ffa938", 00:08:51.270 "is_configured": true, 00:08:51.270 "data_offset": 2048, 00:08:51.270 "data_size": 63488 00:08:51.270 } 00:08:51.270 ] 00:08:51.270 }' 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.270 13:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.530 [2024-11-20 13:22:33.181257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.530 13:22:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.531 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.531 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.531 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.531 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.790 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.790 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.790 "name": "Existed_Raid", 00:08:51.790 "uuid": "5b29db10-e123-47e1-afe4-1f17fb186087", 00:08:51.790 "strip_size_kb": 64, 00:08:51.790 "state": "configuring", 00:08:51.790 "raid_level": "concat", 00:08:51.790 "superblock": true, 00:08:51.790 "num_base_bdevs": 3, 00:08:51.790 "num_base_bdevs_discovered": 1, 00:08:51.790 "num_base_bdevs_operational": 3, 00:08:51.790 "base_bdevs_list": [ 00:08:51.790 { 00:08:51.790 "name": "BaseBdev1", 00:08:51.790 "uuid": "367884a3-3e24-47fa-bfbb-2695143df045", 00:08:51.790 "is_configured": true, 00:08:51.790 "data_offset": 2048, 00:08:51.790 "data_size": 63488 00:08:51.790 }, 00:08:51.790 { 00:08:51.790 "name": null, 00:08:51.790 "uuid": "0615abce-7f89-4dee-ab92-065ae110f3fc", 00:08:51.790 "is_configured": false, 00:08:51.790 "data_offset": 0, 00:08:51.790 "data_size": 63488 00:08:51.790 }, 00:08:51.790 { 00:08:51.790 "name": null, 00:08:51.790 "uuid": "a74e7e4a-ced8-4527-abb2-fbbad9ffa938", 00:08:51.790 "is_configured": false, 00:08:51.790 "data_offset": 0, 00:08:51.791 "data_size": 63488 00:08:51.791 } 00:08:51.791 ] 00:08:51.791 }' 00:08:51.791 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.791 13:22:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.051 [2024-11-20 13:22:33.680399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.051 13:22:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.051 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.310 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.310 "name": "Existed_Raid", 00:08:52.310 "uuid": "5b29db10-e123-47e1-afe4-1f17fb186087", 00:08:52.310 "strip_size_kb": 64, 00:08:52.310 "state": "configuring", 00:08:52.310 "raid_level": "concat", 00:08:52.310 "superblock": true, 00:08:52.310 "num_base_bdevs": 3, 00:08:52.310 "num_base_bdevs_discovered": 2, 00:08:52.310 "num_base_bdevs_operational": 3, 00:08:52.310 "base_bdevs_list": [ 00:08:52.311 { 00:08:52.311 "name": "BaseBdev1", 00:08:52.311 "uuid": "367884a3-3e24-47fa-bfbb-2695143df045", 00:08:52.311 "is_configured": true, 00:08:52.311 "data_offset": 2048, 00:08:52.311 "data_size": 63488 00:08:52.311 }, 00:08:52.311 { 00:08:52.311 "name": null, 00:08:52.311 "uuid": "0615abce-7f89-4dee-ab92-065ae110f3fc", 00:08:52.311 "is_configured": 
false, 00:08:52.311 "data_offset": 0, 00:08:52.311 "data_size": 63488 00:08:52.311 }, 00:08:52.311 { 00:08:52.311 "name": "BaseBdev3", 00:08:52.311 "uuid": "a74e7e4a-ced8-4527-abb2-fbbad9ffa938", 00:08:52.311 "is_configured": true, 00:08:52.311 "data_offset": 2048, 00:08:52.311 "data_size": 63488 00:08:52.311 } 00:08:52.311 ] 00:08:52.311 }' 00:08:52.311 13:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.311 13:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.570 [2024-11-20 13:22:34.123710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.570 13:22:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.570 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.570 "name": "Existed_Raid", 00:08:52.570 "uuid": "5b29db10-e123-47e1-afe4-1f17fb186087", 00:08:52.570 "strip_size_kb": 64, 00:08:52.570 "state": "configuring", 00:08:52.570 "raid_level": "concat", 00:08:52.570 "superblock": true, 00:08:52.570 "num_base_bdevs": 3, 00:08:52.570 
"num_base_bdevs_discovered": 1, 00:08:52.570 "num_base_bdevs_operational": 3, 00:08:52.570 "base_bdevs_list": [ 00:08:52.570 { 00:08:52.570 "name": null, 00:08:52.570 "uuid": "367884a3-3e24-47fa-bfbb-2695143df045", 00:08:52.570 "is_configured": false, 00:08:52.570 "data_offset": 0, 00:08:52.570 "data_size": 63488 00:08:52.570 }, 00:08:52.570 { 00:08:52.570 "name": null, 00:08:52.571 "uuid": "0615abce-7f89-4dee-ab92-065ae110f3fc", 00:08:52.571 "is_configured": false, 00:08:52.571 "data_offset": 0, 00:08:52.571 "data_size": 63488 00:08:52.571 }, 00:08:52.571 { 00:08:52.571 "name": "BaseBdev3", 00:08:52.571 "uuid": "a74e7e4a-ced8-4527-abb2-fbbad9ffa938", 00:08:52.571 "is_configured": true, 00:08:52.571 "data_offset": 2048, 00:08:52.571 "data_size": 63488 00:08:52.571 } 00:08:52.571 ] 00:08:52.571 }' 00:08:52.571 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.571 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.145 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.146 13:22:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.146 [2024-11-20 13:22:34.641383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.146 
13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.146 "name": "Existed_Raid", 00:08:53.146 "uuid": "5b29db10-e123-47e1-afe4-1f17fb186087", 00:08:53.146 "strip_size_kb": 64, 00:08:53.146 "state": "configuring", 00:08:53.146 "raid_level": "concat", 00:08:53.146 "superblock": true, 00:08:53.146 "num_base_bdevs": 3, 00:08:53.146 "num_base_bdevs_discovered": 2, 00:08:53.146 "num_base_bdevs_operational": 3, 00:08:53.146 "base_bdevs_list": [ 00:08:53.146 { 00:08:53.146 "name": null, 00:08:53.146 "uuid": "367884a3-3e24-47fa-bfbb-2695143df045", 00:08:53.146 "is_configured": false, 00:08:53.146 "data_offset": 0, 00:08:53.146 "data_size": 63488 00:08:53.146 }, 00:08:53.146 { 00:08:53.146 "name": "BaseBdev2", 00:08:53.146 "uuid": "0615abce-7f89-4dee-ab92-065ae110f3fc", 00:08:53.146 "is_configured": true, 00:08:53.146 "data_offset": 2048, 00:08:53.146 "data_size": 63488 00:08:53.146 }, 00:08:53.146 { 00:08:53.146 "name": "BaseBdev3", 00:08:53.146 "uuid": "a74e7e4a-ced8-4527-abb2-fbbad9ffa938", 00:08:53.146 "is_configured": true, 00:08:53.146 "data_offset": 2048, 00:08:53.146 "data_size": 63488 00:08:53.146 } 00:08:53.146 ] 00:08:53.146 }' 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.146 13:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 367884a3-3e24-47fa-bfbb-2695143df045 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.715 [2024-11-20 13:22:35.163558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:53.715 [2024-11-20 13:22:35.163798] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:53.715 [2024-11-20 13:22:35.163854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:53.715 [2024-11-20 13:22:35.164145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:53.715 NewBaseBdev 00:08:53.715 [2024-11-20 13:22:35.164302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:53.715 [2024-11-20 13:22:35.164314] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001c80 00:08:53.715 [2024-11-20 13:22:35.164421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.715 [ 00:08:53.715 { 00:08:53.715 "name": "NewBaseBdev", 00:08:53.715 "aliases": [ 00:08:53.715 "367884a3-3e24-47fa-bfbb-2695143df045" 00:08:53.715 ], 00:08:53.715 "product_name": "Malloc disk", 00:08:53.715 "block_size": 512, 
00:08:53.715 "num_blocks": 65536, 00:08:53.715 "uuid": "367884a3-3e24-47fa-bfbb-2695143df045", 00:08:53.715 "assigned_rate_limits": { 00:08:53.715 "rw_ios_per_sec": 0, 00:08:53.715 "rw_mbytes_per_sec": 0, 00:08:53.715 "r_mbytes_per_sec": 0, 00:08:53.715 "w_mbytes_per_sec": 0 00:08:53.715 }, 00:08:53.715 "claimed": true, 00:08:53.715 "claim_type": "exclusive_write", 00:08:53.715 "zoned": false, 00:08:53.715 "supported_io_types": { 00:08:53.715 "read": true, 00:08:53.715 "write": true, 00:08:53.715 "unmap": true, 00:08:53.715 "flush": true, 00:08:53.715 "reset": true, 00:08:53.715 "nvme_admin": false, 00:08:53.715 "nvme_io": false, 00:08:53.715 "nvme_io_md": false, 00:08:53.715 "write_zeroes": true, 00:08:53.715 "zcopy": true, 00:08:53.715 "get_zone_info": false, 00:08:53.715 "zone_management": false, 00:08:53.715 "zone_append": false, 00:08:53.715 "compare": false, 00:08:53.715 "compare_and_write": false, 00:08:53.715 "abort": true, 00:08:53.715 "seek_hole": false, 00:08:53.715 "seek_data": false, 00:08:53.715 "copy": true, 00:08:53.715 "nvme_iov_md": false 00:08:53.715 }, 00:08:53.715 "memory_domains": [ 00:08:53.715 { 00:08:53.715 "dma_device_id": "system", 00:08:53.715 "dma_device_type": 1 00:08:53.715 }, 00:08:53.715 { 00:08:53.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.715 "dma_device_type": 2 00:08:53.715 } 00:08:53.715 ], 00:08:53.715 "driver_specific": {} 00:08:53.715 } 00:08:53.715 ] 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:53.715 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.716 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:08:53.716 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.716 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.716 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.716 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.716 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.716 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.716 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.716 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.716 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.716 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.716 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.716 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.716 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.716 "name": "Existed_Raid", 00:08:53.716 "uuid": "5b29db10-e123-47e1-afe4-1f17fb186087", 00:08:53.716 "strip_size_kb": 64, 00:08:53.716 "state": "online", 00:08:53.716 "raid_level": "concat", 00:08:53.716 "superblock": true, 00:08:53.716 "num_base_bdevs": 3, 00:08:53.716 "num_base_bdevs_discovered": 3, 00:08:53.716 "num_base_bdevs_operational": 3, 00:08:53.716 "base_bdevs_list": [ 00:08:53.716 { 00:08:53.716 "name": "NewBaseBdev", 00:08:53.716 "uuid": 
"367884a3-3e24-47fa-bfbb-2695143df045", 00:08:53.716 "is_configured": true, 00:08:53.716 "data_offset": 2048, 00:08:53.716 "data_size": 63488 00:08:53.716 }, 00:08:53.716 { 00:08:53.716 "name": "BaseBdev2", 00:08:53.716 "uuid": "0615abce-7f89-4dee-ab92-065ae110f3fc", 00:08:53.716 "is_configured": true, 00:08:53.716 "data_offset": 2048, 00:08:53.716 "data_size": 63488 00:08:53.716 }, 00:08:53.716 { 00:08:53.716 "name": "BaseBdev3", 00:08:53.716 "uuid": "a74e7e4a-ced8-4527-abb2-fbbad9ffa938", 00:08:53.716 "is_configured": true, 00:08:53.716 "data_offset": 2048, 00:08:53.716 "data_size": 63488 00:08:53.716 } 00:08:53.716 ] 00:08:53.716 }' 00:08:53.716 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.716 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.976 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:53.976 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:53.976 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:53.976 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:53.976 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.976 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:53.976 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:53.976 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.976 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.976 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:53.976 [2024-11-20 13:22:35.643130] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.235 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.235 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:54.235 "name": "Existed_Raid", 00:08:54.235 "aliases": [ 00:08:54.235 "5b29db10-e123-47e1-afe4-1f17fb186087" 00:08:54.235 ], 00:08:54.235 "product_name": "Raid Volume", 00:08:54.235 "block_size": 512, 00:08:54.235 "num_blocks": 190464, 00:08:54.235 "uuid": "5b29db10-e123-47e1-afe4-1f17fb186087", 00:08:54.235 "assigned_rate_limits": { 00:08:54.235 "rw_ios_per_sec": 0, 00:08:54.235 "rw_mbytes_per_sec": 0, 00:08:54.235 "r_mbytes_per_sec": 0, 00:08:54.235 "w_mbytes_per_sec": 0 00:08:54.235 }, 00:08:54.235 "claimed": false, 00:08:54.235 "zoned": false, 00:08:54.235 "supported_io_types": { 00:08:54.235 "read": true, 00:08:54.235 "write": true, 00:08:54.235 "unmap": true, 00:08:54.235 "flush": true, 00:08:54.235 "reset": true, 00:08:54.235 "nvme_admin": false, 00:08:54.235 "nvme_io": false, 00:08:54.235 "nvme_io_md": false, 00:08:54.235 "write_zeroes": true, 00:08:54.235 "zcopy": false, 00:08:54.235 "get_zone_info": false, 00:08:54.235 "zone_management": false, 00:08:54.235 "zone_append": false, 00:08:54.235 "compare": false, 00:08:54.235 "compare_and_write": false, 00:08:54.235 "abort": false, 00:08:54.235 "seek_hole": false, 00:08:54.235 "seek_data": false, 00:08:54.235 "copy": false, 00:08:54.235 "nvme_iov_md": false 00:08:54.235 }, 00:08:54.235 "memory_domains": [ 00:08:54.235 { 00:08:54.235 "dma_device_id": "system", 00:08:54.235 "dma_device_type": 1 00:08:54.235 }, 00:08:54.235 { 00:08:54.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.235 "dma_device_type": 2 00:08:54.235 }, 00:08:54.235 { 00:08:54.235 "dma_device_id": "system", 00:08:54.235 "dma_device_type": 1 00:08:54.235 }, 00:08:54.235 { 00:08:54.235 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.235 "dma_device_type": 2 00:08:54.235 }, 00:08:54.235 { 00:08:54.235 "dma_device_id": "system", 00:08:54.235 "dma_device_type": 1 00:08:54.235 }, 00:08:54.235 { 00:08:54.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.235 "dma_device_type": 2 00:08:54.235 } 00:08:54.236 ], 00:08:54.236 "driver_specific": { 00:08:54.236 "raid": { 00:08:54.236 "uuid": "5b29db10-e123-47e1-afe4-1f17fb186087", 00:08:54.236 "strip_size_kb": 64, 00:08:54.236 "state": "online", 00:08:54.236 "raid_level": "concat", 00:08:54.236 "superblock": true, 00:08:54.236 "num_base_bdevs": 3, 00:08:54.236 "num_base_bdevs_discovered": 3, 00:08:54.236 "num_base_bdevs_operational": 3, 00:08:54.236 "base_bdevs_list": [ 00:08:54.236 { 00:08:54.236 "name": "NewBaseBdev", 00:08:54.236 "uuid": "367884a3-3e24-47fa-bfbb-2695143df045", 00:08:54.236 "is_configured": true, 00:08:54.236 "data_offset": 2048, 00:08:54.236 "data_size": 63488 00:08:54.236 }, 00:08:54.236 { 00:08:54.236 "name": "BaseBdev2", 00:08:54.236 "uuid": "0615abce-7f89-4dee-ab92-065ae110f3fc", 00:08:54.236 "is_configured": true, 00:08:54.236 "data_offset": 2048, 00:08:54.236 "data_size": 63488 00:08:54.236 }, 00:08:54.236 { 00:08:54.236 "name": "BaseBdev3", 00:08:54.236 "uuid": "a74e7e4a-ced8-4527-abb2-fbbad9ffa938", 00:08:54.236 "is_configured": true, 00:08:54.236 "data_offset": 2048, 00:08:54.236 "data_size": 63488 00:08:54.236 } 00:08:54.236 ] 00:08:54.236 } 00:08:54.236 } 00:08:54.236 }' 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:54.236 BaseBdev2 00:08:54.236 BaseBdev3' 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.236 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.495 [2024-11-20 13:22:35.926318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:54.495 [2024-11-20 13:22:35.926382] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.495 [2024-11-20 13:22:35.926477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.495 [2024-11-20 13:22:35.926566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.495 [2024-11-20 13:22:35.926621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001c80 name Existed_Raid, state offline 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77045 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77045 ']' 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 77045 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77045 00:08:54.495 killing process with pid 77045 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77045' 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 77045 00:08:54.495 [2024-11-20 13:22:35.960210] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.495 13:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 77045 00:08:54.495 [2024-11-20 13:22:35.990749] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.755 13:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:54.755 00:08:54.755 real 0m8.896s 00:08:54.755 user 0m15.331s 00:08:54.755 sys 0m1.688s 00:08:54.755 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:54.755 13:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.755 ************************************ 00:08:54.755 END TEST raid_state_function_test_sb 00:08:54.755 ************************************ 00:08:54.755 13:22:36 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:54.755 13:22:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:54.755 13:22:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.755 13:22:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.755 ************************************ 00:08:54.755 START TEST raid_superblock_test 00:08:54.755 ************************************ 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:54.755 13:22:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:54.755 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77649 00:08:54.756 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:54.756 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77649 00:08:54.756 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 77649 ']' 00:08:54.756 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.756 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.756 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.756 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.756 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.756 [2024-11-20 13:22:36.356159] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:08:54.756 [2024-11-20 13:22:36.356363] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77649 ] 00:08:55.015 [2024-11-20 13:22:36.492090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.015 [2024-11-20 13:22:36.518275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.015 [2024-11-20 13:22:36.561414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.015 [2024-11-20 13:22:36.561533] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:55.585 
13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.585 malloc1 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.585 [2024-11-20 13:22:37.212384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:55.585 [2024-11-20 13:22:37.212497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.585 [2024-11-20 13:22:37.212534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:55.585 [2024-11-20 13:22:37.212566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.585 [2024-11-20 13:22:37.214713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.585 [2024-11-20 13:22:37.214782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:55.585 pt1 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.585 malloc2 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.585 [2024-11-20 13:22:37.240978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:55.585 [2024-11-20 13:22:37.241097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.585 [2024-11-20 13:22:37.241129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:55.585 [2024-11-20 13:22:37.241158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.585 [2024-11-20 13:22:37.243230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.585 [2024-11-20 13:22:37.243297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:55.585 
pt2 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.585 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.845 malloc3 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.845 [2024-11-20 13:22:37.273522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:55.845 [2024-11-20 13:22:37.273640] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.845 [2024-11-20 13:22:37.273674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:55.845 [2024-11-20 13:22:37.273703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.845 [2024-11-20 13:22:37.275779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.845 [2024-11-20 13:22:37.275855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:55.845 pt3 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.845 [2024-11-20 13:22:37.285580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:55.845 [2024-11-20 13:22:37.287440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:55.845 [2024-11-20 13:22:37.287554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:55.845 [2024-11-20 13:22:37.287736] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:55.845 [2024-11-20 13:22:37.287782] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:55.845 [2024-11-20 13:22:37.288070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:08:55.845 [2024-11-20 13:22:37.288238] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:55.845 [2024-11-20 13:22:37.288282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:55.845 [2024-11-20 13:22:37.288426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.845 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.846 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.846 13:22:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.846 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.846 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.846 "name": "raid_bdev1", 00:08:55.846 "uuid": "669706d4-6c99-4d82-8fe0-6794b77db205", 00:08:55.846 "strip_size_kb": 64, 00:08:55.846 "state": "online", 00:08:55.846 "raid_level": "concat", 00:08:55.846 "superblock": true, 00:08:55.846 "num_base_bdevs": 3, 00:08:55.846 "num_base_bdevs_discovered": 3, 00:08:55.846 "num_base_bdevs_operational": 3, 00:08:55.846 "base_bdevs_list": [ 00:08:55.846 { 00:08:55.846 "name": "pt1", 00:08:55.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.846 "is_configured": true, 00:08:55.846 "data_offset": 2048, 00:08:55.846 "data_size": 63488 00:08:55.846 }, 00:08:55.846 { 00:08:55.846 "name": "pt2", 00:08:55.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.846 "is_configured": true, 00:08:55.846 "data_offset": 2048, 00:08:55.846 "data_size": 63488 00:08:55.846 }, 00:08:55.846 { 00:08:55.846 "name": "pt3", 00:08:55.846 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.846 "is_configured": true, 00:08:55.846 "data_offset": 2048, 00:08:55.846 "data_size": 63488 00:08:55.846 } 00:08:55.846 ] 00:08:55.846 }' 00:08:55.846 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.846 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.105 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:56.105 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:56.105 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:56.105 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:56.105 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:56.105 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:56.105 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:56.105 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:56.105 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.105 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.105 [2024-11-20 13:22:37.745121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.105 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.365 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:56.365 "name": "raid_bdev1", 00:08:56.365 "aliases": [ 00:08:56.365 "669706d4-6c99-4d82-8fe0-6794b77db205" 00:08:56.365 ], 00:08:56.365 "product_name": "Raid Volume", 00:08:56.365 "block_size": 512, 00:08:56.365 "num_blocks": 190464, 00:08:56.365 "uuid": "669706d4-6c99-4d82-8fe0-6794b77db205", 00:08:56.365 "assigned_rate_limits": { 00:08:56.365 "rw_ios_per_sec": 0, 00:08:56.365 "rw_mbytes_per_sec": 0, 00:08:56.365 "r_mbytes_per_sec": 0, 00:08:56.365 "w_mbytes_per_sec": 0 00:08:56.365 }, 00:08:56.365 "claimed": false, 00:08:56.365 "zoned": false, 00:08:56.365 "supported_io_types": { 00:08:56.365 "read": true, 00:08:56.365 "write": true, 00:08:56.365 "unmap": true, 00:08:56.365 "flush": true, 00:08:56.365 "reset": true, 00:08:56.365 "nvme_admin": false, 00:08:56.365 "nvme_io": false, 00:08:56.365 "nvme_io_md": false, 00:08:56.365 "write_zeroes": true, 00:08:56.365 "zcopy": false, 00:08:56.365 "get_zone_info": false, 00:08:56.365 "zone_management": false, 00:08:56.365 "zone_append": false, 00:08:56.365 "compare": 
false, 00:08:56.365 "compare_and_write": false, 00:08:56.365 "abort": false, 00:08:56.365 "seek_hole": false, 00:08:56.365 "seek_data": false, 00:08:56.365 "copy": false, 00:08:56.366 "nvme_iov_md": false 00:08:56.366 }, 00:08:56.366 "memory_domains": [ 00:08:56.366 { 00:08:56.366 "dma_device_id": "system", 00:08:56.366 "dma_device_type": 1 00:08:56.366 }, 00:08:56.366 { 00:08:56.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.366 "dma_device_type": 2 00:08:56.366 }, 00:08:56.366 { 00:08:56.366 "dma_device_id": "system", 00:08:56.366 "dma_device_type": 1 00:08:56.366 }, 00:08:56.366 { 00:08:56.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.366 "dma_device_type": 2 00:08:56.366 }, 00:08:56.366 { 00:08:56.366 "dma_device_id": "system", 00:08:56.366 "dma_device_type": 1 00:08:56.366 }, 00:08:56.366 { 00:08:56.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.366 "dma_device_type": 2 00:08:56.366 } 00:08:56.366 ], 00:08:56.366 "driver_specific": { 00:08:56.366 "raid": { 00:08:56.366 "uuid": "669706d4-6c99-4d82-8fe0-6794b77db205", 00:08:56.366 "strip_size_kb": 64, 00:08:56.366 "state": "online", 00:08:56.366 "raid_level": "concat", 00:08:56.366 "superblock": true, 00:08:56.366 "num_base_bdevs": 3, 00:08:56.366 "num_base_bdevs_discovered": 3, 00:08:56.366 "num_base_bdevs_operational": 3, 00:08:56.366 "base_bdevs_list": [ 00:08:56.366 { 00:08:56.366 "name": "pt1", 00:08:56.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.366 "is_configured": true, 00:08:56.366 "data_offset": 2048, 00:08:56.366 "data_size": 63488 00:08:56.366 }, 00:08:56.366 { 00:08:56.366 "name": "pt2", 00:08:56.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.366 "is_configured": true, 00:08:56.366 "data_offset": 2048, 00:08:56.366 "data_size": 63488 00:08:56.366 }, 00:08:56.366 { 00:08:56.366 "name": "pt3", 00:08:56.366 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:56.366 "is_configured": true, 00:08:56.366 "data_offset": 2048, 00:08:56.366 
"data_size": 63488 00:08:56.366 } 00:08:56.366 ] 00:08:56.366 } 00:08:56.366 } 00:08:56.366 }' 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:56.366 pt2 00:08:56.366 pt3' 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:56.366 13:22:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.366 13:22:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.366 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.627 [2024-11-20 13:22:38.044505] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.627 13:22:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=669706d4-6c99-4d82-8fe0-6794b77db205 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 669706d4-6c99-4d82-8fe0-6794b77db205 ']' 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.627 [2024-11-20 13:22:38.092155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:56.627 [2024-11-20 13:22:38.092181] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.627 [2024-11-20 13:22:38.092268] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.627 [2024-11-20 13:22:38.092334] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.627 [2024-11-20 13:22:38.092351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.627 13:22:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # rpc_cmd bdev_get_bdevs 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.627 [2024-11-20 13:22:38.231959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:56.627 [2024-11-20 13:22:38.233883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:08:56.627 [2024-11-20 13:22:38.233970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:56.627 [2024-11-20 13:22:38.234048] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:56.627 [2024-11-20 13:22:38.234135] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:56.627 [2024-11-20 13:22:38.234224] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:56.627 [2024-11-20 13:22:38.234274] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:56.627 [2024-11-20 13:22:38.234373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:56.627 request: 00:08:56.627 { 00:08:56.627 "name": "raid_bdev1", 00:08:56.627 "raid_level": "concat", 00:08:56.627 "base_bdevs": [ 00:08:56.627 "malloc1", 00:08:56.627 "malloc2", 00:08:56.627 "malloc3" 00:08:56.627 ], 00:08:56.627 "strip_size_kb": 64, 00:08:56.627 "superblock": false, 00:08:56.627 "method": "bdev_raid_create", 00:08:56.627 "req_id": 1 00:08:56.627 } 00:08:56.627 Got JSON-RPC error response 00:08:56.627 response: 00:08:56.627 { 00:08:56.627 "code": -17, 00:08:56.627 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:56.627 } 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:56.627 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:56.628 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.628 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.628 [2024-11-20 13:22:38.283834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:56.628 [2024-11-20 13:22:38.283937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.628 [2024-11-20 13:22:38.283969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:56.628 [2024-11-20 13:22:38.284014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.628 [2024-11-20 13:22:38.286198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.628 [2024-11-20 13:22:38.286268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:56.628 [2024-11-20 13:22:38.286354] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:56.628 [2024-11-20 13:22:38.286421] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:56.628 pt1 00:08:56.628 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.628 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:56.628 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.628 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.628 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.628 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.628 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.628 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.628 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.628 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.628 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.888 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.888 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.888 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.888 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.888 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.888 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.888 "name": "raid_bdev1", 
00:08:56.888 "uuid": "669706d4-6c99-4d82-8fe0-6794b77db205", 00:08:56.888 "strip_size_kb": 64, 00:08:56.888 "state": "configuring", 00:08:56.888 "raid_level": "concat", 00:08:56.888 "superblock": true, 00:08:56.888 "num_base_bdevs": 3, 00:08:56.888 "num_base_bdevs_discovered": 1, 00:08:56.888 "num_base_bdevs_operational": 3, 00:08:56.888 "base_bdevs_list": [ 00:08:56.888 { 00:08:56.888 "name": "pt1", 00:08:56.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.888 "is_configured": true, 00:08:56.888 "data_offset": 2048, 00:08:56.888 "data_size": 63488 00:08:56.888 }, 00:08:56.888 { 00:08:56.888 "name": null, 00:08:56.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.888 "is_configured": false, 00:08:56.888 "data_offset": 2048, 00:08:56.888 "data_size": 63488 00:08:56.888 }, 00:08:56.888 { 00:08:56.888 "name": null, 00:08:56.888 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:56.888 "is_configured": false, 00:08:56.888 "data_offset": 2048, 00:08:56.888 "data_size": 63488 00:08:56.888 } 00:08:56.888 ] 00:08:56.888 }' 00:08:56.888 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.888 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.148 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:57.148 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:57.148 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.148 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.148 [2024-11-20 13:22:38.739092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:57.148 [2024-11-20 13:22:38.739212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.149 [2024-11-20 13:22:38.739253] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:57.149 [2024-11-20 13:22:38.739284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.149 [2024-11-20 13:22:38.739741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.149 [2024-11-20 13:22:38.739802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:57.149 [2024-11-20 13:22:38.739909] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:57.149 [2024-11-20 13:22:38.739965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:57.149 pt2 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.149 [2024-11-20 13:22:38.751067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.149 "name": "raid_bdev1", 00:08:57.149 "uuid": "669706d4-6c99-4d82-8fe0-6794b77db205", 00:08:57.149 "strip_size_kb": 64, 00:08:57.149 "state": "configuring", 00:08:57.149 "raid_level": "concat", 00:08:57.149 "superblock": true, 00:08:57.149 "num_base_bdevs": 3, 00:08:57.149 "num_base_bdevs_discovered": 1, 00:08:57.149 "num_base_bdevs_operational": 3, 00:08:57.149 "base_bdevs_list": [ 00:08:57.149 { 00:08:57.149 "name": "pt1", 00:08:57.149 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:57.149 "is_configured": true, 00:08:57.149 "data_offset": 2048, 00:08:57.149 "data_size": 63488 00:08:57.149 }, 00:08:57.149 { 00:08:57.149 "name": null, 00:08:57.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.149 "is_configured": false, 00:08:57.149 "data_offset": 0, 00:08:57.149 "data_size": 63488 00:08:57.149 }, 00:08:57.149 { 00:08:57.149 "name": null, 00:08:57.149 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:57.149 "is_configured": false, 00:08:57.149 "data_offset": 2048, 00:08:57.149 "data_size": 63488 00:08:57.149 } 00:08:57.149 ] 00:08:57.149 }' 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.149 13:22:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.728 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:57.728 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:57.728 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:57.728 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.728 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.728 [2024-11-20 13:22:39.226207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:57.728 [2024-11-20 13:22:39.226309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.728 [2024-11-20 13:22:39.226345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:57.728 [2024-11-20 13:22:39.226371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.728 [2024-11-20 13:22:39.226821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.728 [2024-11-20 13:22:39.226878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:57.728 [2024-11-20 13:22:39.226983] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:57.728 [2024-11-20 13:22:39.227043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:57.728 pt2 00:08:57.728 13:22:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.728 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:57.728 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:57.728 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:57.728 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.728 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.728 [2024-11-20 13:22:39.238171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:57.729 [2024-11-20 13:22:39.238266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.729 [2024-11-20 13:22:39.238300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:57.729 [2024-11-20 13:22:39.238325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.729 [2024-11-20 13:22:39.238693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.729 [2024-11-20 13:22:39.238748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:57.729 [2024-11-20 13:22:39.238832] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:57.729 [2024-11-20 13:22:39.238876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:57.729 [2024-11-20 13:22:39.239020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:57.729 [2024-11-20 13:22:39.239061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:57.729 [2024-11-20 13:22:39.239306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002530 00:08:57.729 [2024-11-20 13:22:39.239445] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:57.729 [2024-11-20 13:22:39.239484] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:57.729 [2024-11-20 13:22:39.239627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.729 pt3 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.729 13:22:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.729 "name": "raid_bdev1", 00:08:57.729 "uuid": "669706d4-6c99-4d82-8fe0-6794b77db205", 00:08:57.729 "strip_size_kb": 64, 00:08:57.729 "state": "online", 00:08:57.729 "raid_level": "concat", 00:08:57.729 "superblock": true, 00:08:57.729 "num_base_bdevs": 3, 00:08:57.729 "num_base_bdevs_discovered": 3, 00:08:57.729 "num_base_bdevs_operational": 3, 00:08:57.729 "base_bdevs_list": [ 00:08:57.729 { 00:08:57.729 "name": "pt1", 00:08:57.729 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:57.729 "is_configured": true, 00:08:57.729 "data_offset": 2048, 00:08:57.729 "data_size": 63488 00:08:57.729 }, 00:08:57.729 { 00:08:57.729 "name": "pt2", 00:08:57.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.729 "is_configured": true, 00:08:57.729 "data_offset": 2048, 00:08:57.729 "data_size": 63488 00:08:57.729 }, 00:08:57.729 { 00:08:57.729 "name": "pt3", 00:08:57.729 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:57.729 "is_configured": true, 00:08:57.729 "data_offset": 2048, 00:08:57.729 "data_size": 63488 00:08:57.729 } 00:08:57.729 ] 00:08:57.729 }' 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.729 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.299 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:58.299 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:08:58.299 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:58.299 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:58.299 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:58.299 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:58.299 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:58.299 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:58.299 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.299 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.299 [2024-11-20 13:22:39.745658] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.299 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.299 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:58.299 "name": "raid_bdev1", 00:08:58.299 "aliases": [ 00:08:58.299 "669706d4-6c99-4d82-8fe0-6794b77db205" 00:08:58.299 ], 00:08:58.299 "product_name": "Raid Volume", 00:08:58.299 "block_size": 512, 00:08:58.299 "num_blocks": 190464, 00:08:58.299 "uuid": "669706d4-6c99-4d82-8fe0-6794b77db205", 00:08:58.299 "assigned_rate_limits": { 00:08:58.299 "rw_ios_per_sec": 0, 00:08:58.299 "rw_mbytes_per_sec": 0, 00:08:58.299 "r_mbytes_per_sec": 0, 00:08:58.300 "w_mbytes_per_sec": 0 00:08:58.300 }, 00:08:58.300 "claimed": false, 00:08:58.300 "zoned": false, 00:08:58.300 "supported_io_types": { 00:08:58.300 "read": true, 00:08:58.300 "write": true, 00:08:58.300 "unmap": true, 00:08:58.300 "flush": true, 00:08:58.300 "reset": true, 00:08:58.300 "nvme_admin": false, 00:08:58.300 "nvme_io": false, 
00:08:58.300 "nvme_io_md": false, 00:08:58.300 "write_zeroes": true, 00:08:58.300 "zcopy": false, 00:08:58.300 "get_zone_info": false, 00:08:58.300 "zone_management": false, 00:08:58.300 "zone_append": false, 00:08:58.300 "compare": false, 00:08:58.300 "compare_and_write": false, 00:08:58.300 "abort": false, 00:08:58.300 "seek_hole": false, 00:08:58.300 "seek_data": false, 00:08:58.300 "copy": false, 00:08:58.300 "nvme_iov_md": false 00:08:58.300 }, 00:08:58.300 "memory_domains": [ 00:08:58.300 { 00:08:58.300 "dma_device_id": "system", 00:08:58.300 "dma_device_type": 1 00:08:58.300 }, 00:08:58.300 { 00:08:58.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.300 "dma_device_type": 2 00:08:58.300 }, 00:08:58.300 { 00:08:58.300 "dma_device_id": "system", 00:08:58.300 "dma_device_type": 1 00:08:58.300 }, 00:08:58.300 { 00:08:58.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.300 "dma_device_type": 2 00:08:58.300 }, 00:08:58.300 { 00:08:58.300 "dma_device_id": "system", 00:08:58.300 "dma_device_type": 1 00:08:58.300 }, 00:08:58.300 { 00:08:58.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.300 "dma_device_type": 2 00:08:58.300 } 00:08:58.300 ], 00:08:58.300 "driver_specific": { 00:08:58.300 "raid": { 00:08:58.300 "uuid": "669706d4-6c99-4d82-8fe0-6794b77db205", 00:08:58.300 "strip_size_kb": 64, 00:08:58.300 "state": "online", 00:08:58.300 "raid_level": "concat", 00:08:58.300 "superblock": true, 00:08:58.300 "num_base_bdevs": 3, 00:08:58.300 "num_base_bdevs_discovered": 3, 00:08:58.300 "num_base_bdevs_operational": 3, 00:08:58.300 "base_bdevs_list": [ 00:08:58.300 { 00:08:58.300 "name": "pt1", 00:08:58.300 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.300 "is_configured": true, 00:08:58.300 "data_offset": 2048, 00:08:58.300 "data_size": 63488 00:08:58.300 }, 00:08:58.300 { 00:08:58.300 "name": "pt2", 00:08:58.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.300 "is_configured": true, 00:08:58.300 "data_offset": 2048, 00:08:58.300 
"data_size": 63488 00:08:58.300 }, 00:08:58.300 { 00:08:58.300 "name": "pt3", 00:08:58.300 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:58.300 "is_configured": true, 00:08:58.300 "data_offset": 2048, 00:08:58.300 "data_size": 63488 00:08:58.300 } 00:08:58.300 ] 00:08:58.300 } 00:08:58.300 } 00:08:58.300 }' 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:58.300 pt2 00:08:58.300 pt3' 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.300 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.560 13:22:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.560 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.560 13:22:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:58.560 [2024-11-20 13:22:40.009166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 669706d4-6c99-4d82-8fe0-6794b77db205 '!=' 669706d4-6c99-4d82-8fe0-6794b77db205 ']' 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77649 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 77649 ']' 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 77649 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77649 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77649' 00:08:58.560 killing process with pid 77649 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 77649 00:08:58.560 [2024-11-20 13:22:40.093629] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:58.560 [2024-11-20 13:22:40.093759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.560 13:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 77649 00:08:58.560 [2024-11-20 13:22:40.093849] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.560 [2024-11-20 13:22:40.093859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:58.560 [2024-11-20 13:22:40.126715] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:58.820 13:22:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:58.820 00:08:58.820 real 0m4.068s 00:08:58.820 user 0m6.476s 00:08:58.820 sys 0m0.850s 00:08:58.820 13:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.820 13:22:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.820 ************************************ 00:08:58.820 END TEST raid_superblock_test 00:08:58.820 ************************************ 00:08:58.820 13:22:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:58.820 13:22:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:58.820 13:22:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.820 13:22:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.820 ************************************ 00:08:58.820 START TEST raid_read_error_test 00:08:58.820 ************************************ 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:58.820 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:58.821 13:22:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.I3wovhMCN4 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=77891 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 77891 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 77891 ']' 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.821 13:22:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.080 [2024-11-20 13:22:40.509118] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:08:59.080 [2024-11-20 13:22:40.509323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77891 ] 00:08:59.080 [2024-11-20 13:22:40.645243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.080 [2024-11-20 13:22:40.670567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.080 [2024-11-20 13:22:40.714408] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.080 [2024-11-20 13:22:40.714514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.022 BaseBdev1_malloc 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.022 true 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.022 [2024-11-20 13:22:41.377312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:00.022 [2024-11-20 13:22:41.377407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.022 [2024-11-20 13:22:41.377462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:00.022 [2024-11-20 13:22:41.377489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.022 [2024-11-20 13:22:41.379649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.022 [2024-11-20 13:22:41.379724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:00.022 BaseBdev1 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.022 BaseBdev2_malloc 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.022 true 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.022 [2024-11-20 13:22:41.418001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:00.022 [2024-11-20 13:22:41.418047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.022 [2024-11-20 13:22:41.418079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:00.022 [2024-11-20 13:22:41.418095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.022 [2024-11-20 13:22:41.420199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.022 [2024-11-20 13:22:41.420241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:00.022 BaseBdev2 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.022 BaseBdev3_malloc 00:09:00.022 13:22:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.022 true 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.022 [2024-11-20 13:22:41.458715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:00.022 [2024-11-20 13:22:41.458802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.022 [2024-11-20 13:22:41.458840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:00.022 [2024-11-20 13:22:41.458849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.022 [2024-11-20 13:22:41.460933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.022 [2024-11-20 13:22:41.460971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:00.022 BaseBdev3 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.022 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.022 [2024-11-20 13:22:41.470769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.022 [2024-11-20 13:22:41.472657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.022 [2024-11-20 13:22:41.472772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.022 [2024-11-20 13:22:41.473001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:00.022 [2024-11-20 13:22:41.473054] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:00.022 [2024-11-20 13:22:41.473350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:09:00.022 [2024-11-20 13:22:41.473523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:00.022 [2024-11-20 13:22:41.473565] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:00.022 [2024-11-20 13:22:41.473739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.023 13:22:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.023 "name": "raid_bdev1", 00:09:00.023 "uuid": "0fecddff-e7ff-423f-8b44-2fd21113adfc", 00:09:00.023 "strip_size_kb": 64, 00:09:00.023 "state": "online", 00:09:00.023 "raid_level": "concat", 00:09:00.023 "superblock": true, 00:09:00.023 "num_base_bdevs": 3, 00:09:00.023 "num_base_bdevs_discovered": 3, 00:09:00.023 "num_base_bdevs_operational": 3, 00:09:00.023 "base_bdevs_list": [ 00:09:00.023 { 00:09:00.023 "name": "BaseBdev1", 00:09:00.023 "uuid": "105e5d93-cd8f-5fca-b5b0-fc7e9a6f516b", 00:09:00.023 "is_configured": true, 00:09:00.023 "data_offset": 2048, 00:09:00.023 "data_size": 63488 00:09:00.023 }, 00:09:00.023 { 00:09:00.023 "name": "BaseBdev2", 00:09:00.023 "uuid": "fa74356c-f821-5786-a545-107c2972ff36", 00:09:00.023 "is_configured": true, 00:09:00.023 "data_offset": 2048, 00:09:00.023 "data_size": 63488 
00:09:00.023 }, 00:09:00.023 { 00:09:00.023 "name": "BaseBdev3", 00:09:00.023 "uuid": "3a38e99e-2f80-5c17-b86b-a3b453132d85", 00:09:00.023 "is_configured": true, 00:09:00.023 "data_offset": 2048, 00:09:00.023 "data_size": 63488 00:09:00.023 } 00:09:00.023 ] 00:09:00.023 }' 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.023 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.283 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:00.283 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:00.543 [2024-11-20 13:22:42.006279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.483 "name": "raid_bdev1", 00:09:01.483 "uuid": "0fecddff-e7ff-423f-8b44-2fd21113adfc", 00:09:01.483 "strip_size_kb": 64, 00:09:01.483 "state": "online", 00:09:01.483 "raid_level": "concat", 00:09:01.483 "superblock": true, 00:09:01.483 "num_base_bdevs": 3, 00:09:01.483 "num_base_bdevs_discovered": 3, 00:09:01.483 "num_base_bdevs_operational": 3, 00:09:01.483 "base_bdevs_list": [ 00:09:01.483 { 00:09:01.483 "name": "BaseBdev1", 00:09:01.483 "uuid": "105e5d93-cd8f-5fca-b5b0-fc7e9a6f516b", 00:09:01.483 "is_configured": true, 00:09:01.483 "data_offset": 2048, 00:09:01.483 "data_size": 63488 
00:09:01.483 }, 00:09:01.483 { 00:09:01.483 "name": "BaseBdev2", 00:09:01.483 "uuid": "fa74356c-f821-5786-a545-107c2972ff36", 00:09:01.483 "is_configured": true, 00:09:01.483 "data_offset": 2048, 00:09:01.483 "data_size": 63488 00:09:01.483 }, 00:09:01.483 { 00:09:01.483 "name": "BaseBdev3", 00:09:01.483 "uuid": "3a38e99e-2f80-5c17-b86b-a3b453132d85", 00:09:01.483 "is_configured": true, 00:09:01.483 "data_offset": 2048, 00:09:01.483 "data_size": 63488 00:09:01.483 } 00:09:01.483 ] 00:09:01.483 }' 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.483 13:22:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.744 13:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:01.744 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.744 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.744 [2024-11-20 13:22:43.402035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:01.744 [2024-11-20 13:22:43.402125] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.744 [2024-11-20 13:22:43.404795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.744 [2024-11-20 13:22:43.404888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.744 [2024-11-20 13:22:43.404942] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.744 [2024-11-20 13:22:43.404985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:01.744 { 00:09:01.744 "results": [ 00:09:01.744 { 00:09:01.744 "job": "raid_bdev1", 00:09:01.744 "core_mask": "0x1", 00:09:01.744 "workload": "randrw", 00:09:01.744 "percentage": 50, 
00:09:01.744 "status": "finished", 00:09:01.744 "queue_depth": 1, 00:09:01.744 "io_size": 131072, 00:09:01.744 "runtime": 1.396678, 00:09:01.744 "iops": 16766.928382920043, 00:09:01.744 "mibps": 2095.8660478650054, 00:09:01.744 "io_failed": 1, 00:09:01.744 "io_timeout": 0, 00:09:01.744 "avg_latency_us": 82.59120204529185, 00:09:01.744 "min_latency_us": 24.929257641921396, 00:09:01.744 "max_latency_us": 1402.2986899563318 00:09:01.744 } 00:09:01.744 ], 00:09:01.744 "core_count": 1 00:09:01.744 } 00:09:01.744 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.744 13:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 77891 00:09:01.744 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 77891 ']' 00:09:01.744 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 77891 00:09:02.003 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:02.003 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.003 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77891 00:09:02.003 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:02.003 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:02.003 killing process with pid 77891 00:09:02.003 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77891' 00:09:02.003 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 77891 00:09:02.003 [2024-11-20 13:22:43.445593] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.003 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 77891 00:09:02.003 [2024-11-20 
13:22:43.471810] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.263 13:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.I3wovhMCN4 00:09:02.263 13:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:02.263 13:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:02.263 13:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:02.263 13:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:02.263 13:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:02.263 13:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:02.263 ************************************ 00:09:02.263 END TEST raid_read_error_test 00:09:02.263 ************************************ 00:09:02.263 13:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:02.263 00:09:02.263 real 0m3.275s 00:09:02.263 user 0m4.215s 00:09:02.263 sys 0m0.495s 00:09:02.263 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.263 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.263 13:22:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:02.263 13:22:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:02.263 13:22:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.263 13:22:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:02.263 ************************************ 00:09:02.263 START TEST raid_write_error_test 00:09:02.263 ************************************ 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:02.263 13:22:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:02.263 13:22:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GhlPKatcI8 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78020 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78020 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 78020 ']' 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.263 13:22:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.263 [2024-11-20 13:22:43.862724] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:09:02.263 [2024-11-20 13:22:43.862926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78020 ] 00:09:02.523 [2024-11-20 13:22:44.016402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.523 [2024-11-20 13:22:44.045722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.523 [2024-11-20 13:22:44.089740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.523 [2024-11-20 13:22:44.089858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.093 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.093 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:03.093 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.093 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:03.093 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.093 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.094 BaseBdev1_malloc 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.094 true 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.094 [2024-11-20 13:22:44.736539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:03.094 [2024-11-20 13:22:44.736656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.094 [2024-11-20 13:22:44.736695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:03.094 [2024-11-20 13:22:44.736724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.094 [2024-11-20 13:22:44.738887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.094 [2024-11-20 13:22:44.738953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:03.094 BaseBdev1 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.094 BaseBdev2_malloc 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.094 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.354 true 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.354 [2024-11-20 13:22:44.777482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:03.354 [2024-11-20 13:22:44.777595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.354 [2024-11-20 13:22:44.777633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:03.354 [2024-11-20 13:22:44.777674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.354 [2024-11-20 13:22:44.780051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.354 [2024-11-20 13:22:44.780132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:03.354 BaseBdev2 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.354 13:22:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.354 BaseBdev3_malloc 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.354 true 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.354 [2024-11-20 13:22:44.818357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:03.354 [2024-11-20 13:22:44.818448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.354 [2024-11-20 13:22:44.818502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:03.354 [2024-11-20 13:22:44.818531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.354 [2024-11-20 13:22:44.820699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.354 [2024-11-20 13:22:44.820784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:03.354 BaseBdev3 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.354 [2024-11-20 13:22:44.830418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.354 [2024-11-20 13:22:44.832286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.354 [2024-11-20 13:22:44.832360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.354 [2024-11-20 13:22:44.832535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:03.354 [2024-11-20 13:22:44.832550] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:03.354 [2024-11-20 13:22:44.832824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:09:03.354 [2024-11-20 13:22:44.832966] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:03.354 [2024-11-20 13:22:44.832979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:03.354 [2024-11-20 13:22:44.833117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.354 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.354 "name": "raid_bdev1", 00:09:03.354 "uuid": "0eaad5f0-c808-4fb1-81b9-8cbc63d83821", 00:09:03.354 "strip_size_kb": 64, 00:09:03.354 "state": "online", 00:09:03.354 "raid_level": "concat", 00:09:03.354 "superblock": true, 00:09:03.354 "num_base_bdevs": 3, 00:09:03.354 "num_base_bdevs_discovered": 3, 00:09:03.354 "num_base_bdevs_operational": 3, 00:09:03.354 "base_bdevs_list": [ 00:09:03.354 { 00:09:03.354 
"name": "BaseBdev1", 00:09:03.355 "uuid": "98e3219c-2352-5abb-b32b-bc4eef22b46e", 00:09:03.355 "is_configured": true, 00:09:03.355 "data_offset": 2048, 00:09:03.355 "data_size": 63488 00:09:03.355 }, 00:09:03.355 { 00:09:03.355 "name": "BaseBdev2", 00:09:03.355 "uuid": "a728cc3b-b776-5067-b00e-e1680820e1ce", 00:09:03.355 "is_configured": true, 00:09:03.355 "data_offset": 2048, 00:09:03.355 "data_size": 63488 00:09:03.355 }, 00:09:03.355 { 00:09:03.355 "name": "BaseBdev3", 00:09:03.355 "uuid": "8c963528-1b0a-5175-9c22-7100561e6eb6", 00:09:03.355 "is_configured": true, 00:09:03.355 "data_offset": 2048, 00:09:03.355 "data_size": 63488 00:09:03.355 } 00:09:03.355 ] 00:09:03.355 }' 00:09:03.355 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.355 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.614 13:22:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:03.614 13:22:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:03.874 [2024-11-20 13:22:45.318062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.815 "name": "raid_bdev1", 00:09:04.815 "uuid": "0eaad5f0-c808-4fb1-81b9-8cbc63d83821", 00:09:04.815 "strip_size_kb": 64, 00:09:04.815 "state": "online", 
00:09:04.815 "raid_level": "concat", 00:09:04.815 "superblock": true, 00:09:04.815 "num_base_bdevs": 3, 00:09:04.815 "num_base_bdevs_discovered": 3, 00:09:04.815 "num_base_bdevs_operational": 3, 00:09:04.815 "base_bdevs_list": [ 00:09:04.815 { 00:09:04.815 "name": "BaseBdev1", 00:09:04.815 "uuid": "98e3219c-2352-5abb-b32b-bc4eef22b46e", 00:09:04.815 "is_configured": true, 00:09:04.815 "data_offset": 2048, 00:09:04.815 "data_size": 63488 00:09:04.815 }, 00:09:04.815 { 00:09:04.815 "name": "BaseBdev2", 00:09:04.815 "uuid": "a728cc3b-b776-5067-b00e-e1680820e1ce", 00:09:04.815 "is_configured": true, 00:09:04.815 "data_offset": 2048, 00:09:04.815 "data_size": 63488 00:09:04.815 }, 00:09:04.815 { 00:09:04.815 "name": "BaseBdev3", 00:09:04.815 "uuid": "8c963528-1b0a-5175-9c22-7100561e6eb6", 00:09:04.815 "is_configured": true, 00:09:04.815 "data_offset": 2048, 00:09:04.815 "data_size": 63488 00:09:04.815 } 00:09:04.815 ] 00:09:04.815 }' 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.815 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.074 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:05.074 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.074 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.074 [2024-11-20 13:22:46.738421] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.074 [2024-11-20 13:22:46.738509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.075 [2024-11-20 13:22:46.741392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.334 { 00:09:05.334 "results": [ 00:09:05.334 { 00:09:05.334 "job": "raid_bdev1", 00:09:05.334 "core_mask": "0x1", 00:09:05.334 "workload": "randrw", 00:09:05.334 
"percentage": 50, 00:09:05.334 "status": "finished", 00:09:05.334 "queue_depth": 1, 00:09:05.334 "io_size": 131072, 00:09:05.334 "runtime": 1.421109, 00:09:05.334 "iops": 16672.19052162783, 00:09:05.334 "mibps": 2084.0238152034785, 00:09:05.334 "io_failed": 1, 00:09:05.334 "io_timeout": 0, 00:09:05.334 "avg_latency_us": 83.0435316662999, 00:09:05.334 "min_latency_us": 25.041048034934498, 00:09:05.334 "max_latency_us": 1352.216593886463 00:09:05.334 } 00:09:05.334 ], 00:09:05.334 "core_count": 1 00:09:05.334 } 00:09:05.334 [2024-11-20 13:22:46.741482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.334 [2024-11-20 13:22:46.741524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.334 [2024-11-20 13:22:46.741536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:05.334 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.334 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78020 00:09:05.334 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 78020 ']' 00:09:05.334 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 78020 00:09:05.334 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:05.334 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.334 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78020 00:09:05.334 killing process with pid 78020 00:09:05.334 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.334 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.334 13:22:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78020' 00:09:05.334 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 78020 00:09:05.334 [2024-11-20 13:22:46.789554] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.334 13:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 78020 00:09:05.334 [2024-11-20 13:22:46.815745] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.594 13:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:05.594 13:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GhlPKatcI8 00:09:05.594 13:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:05.594 13:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:09:05.594 13:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:05.594 13:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:05.595 13:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:05.595 13:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:09:05.595 00:09:05.595 real 0m3.268s 00:09:05.595 user 0m4.166s 00:09:05.595 sys 0m0.530s 00:09:05.595 13:22:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.595 13:22:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.595 ************************************ 00:09:05.595 END TEST raid_write_error_test 00:09:05.595 ************************************ 00:09:05.595 13:22:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:05.595 13:22:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:05.595 13:22:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:05.595 13:22:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.595 13:22:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.595 ************************************ 00:09:05.595 START TEST raid_state_function_test 00:09:05.595 ************************************ 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78153 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78153' 00:09:05.595 Process raid pid: 78153 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78153 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 78153 ']' 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.595 13:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.595 [2024-11-20 13:22:47.187958] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:09:05.595 [2024-11-20 13:22:47.188153] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.854 [2024-11-20 13:22:47.343083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.854 [2024-11-20 13:22:47.368504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.855 [2024-11-20 13:22:47.413293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.855 [2024-11-20 13:22:47.413413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.423 [2024-11-20 13:22:48.011413] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:06.423 [2024-11-20 13:22:48.011479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:06.423 [2024-11-20 13:22:48.011489] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:06.423 [2024-11-20 13:22:48.011499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:06.423 [2024-11-20 13:22:48.011505] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:06.423 [2024-11-20 13:22:48.011516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.423 
13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.423 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.423 "name": "Existed_Raid", 00:09:06.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.423 "strip_size_kb": 0, 00:09:06.423 "state": "configuring", 00:09:06.423 "raid_level": "raid1", 00:09:06.423 "superblock": false, 00:09:06.423 "num_base_bdevs": 3, 00:09:06.423 "num_base_bdevs_discovered": 0, 00:09:06.423 "num_base_bdevs_operational": 3, 00:09:06.423 "base_bdevs_list": [ 00:09:06.423 { 00:09:06.423 "name": "BaseBdev1", 00:09:06.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.423 "is_configured": false, 00:09:06.423 "data_offset": 0, 00:09:06.423 "data_size": 0 00:09:06.423 }, 00:09:06.423 { 00:09:06.423 "name": "BaseBdev2", 00:09:06.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.423 "is_configured": false, 00:09:06.423 "data_offset": 0, 00:09:06.423 "data_size": 0 00:09:06.423 }, 00:09:06.423 { 00:09:06.423 "name": "BaseBdev3", 00:09:06.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.424 "is_configured": false, 00:09:06.424 "data_offset": 0, 00:09:06.424 "data_size": 0 00:09:06.424 } 00:09:06.424 ] 00:09:06.424 }' 00:09:06.424 13:22:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.424 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.021 [2024-11-20 13:22:48.506505] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.021 [2024-11-20 13:22:48.506589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.021 [2024-11-20 13:22:48.518492] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:07.021 [2024-11-20 13:22:48.518573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:07.021 [2024-11-20 13:22:48.518600] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:07.021 [2024-11-20 13:22:48.518621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:07.021 [2024-11-20 13:22:48.518639] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:07.021 [2024-11-20 13:22:48.518660] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.021 [2024-11-20 13:22:48.539705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.021 BaseBdev1 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.021 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.021 [ 00:09:07.021 { 00:09:07.021 "name": "BaseBdev1", 00:09:07.021 "aliases": [ 00:09:07.021 "2b0c00ed-b3c3-4108-ba5b-46813ab43b85" 00:09:07.021 ], 00:09:07.021 "product_name": "Malloc disk", 00:09:07.021 "block_size": 512, 00:09:07.021 "num_blocks": 65536, 00:09:07.021 "uuid": "2b0c00ed-b3c3-4108-ba5b-46813ab43b85", 00:09:07.021 "assigned_rate_limits": { 00:09:07.021 "rw_ios_per_sec": 0, 00:09:07.021 "rw_mbytes_per_sec": 0, 00:09:07.021 "r_mbytes_per_sec": 0, 00:09:07.021 "w_mbytes_per_sec": 0 00:09:07.021 }, 00:09:07.021 "claimed": true, 00:09:07.021 "claim_type": "exclusive_write", 00:09:07.021 "zoned": false, 00:09:07.021 "supported_io_types": { 00:09:07.021 "read": true, 00:09:07.021 "write": true, 00:09:07.021 "unmap": true, 00:09:07.021 "flush": true, 00:09:07.021 "reset": true, 00:09:07.021 "nvme_admin": false, 00:09:07.021 "nvme_io": false, 00:09:07.021 "nvme_io_md": false, 00:09:07.021 "write_zeroes": true, 00:09:07.021 "zcopy": true, 00:09:07.021 "get_zone_info": false, 00:09:07.021 "zone_management": false, 00:09:07.021 "zone_append": false, 00:09:07.021 "compare": false, 00:09:07.021 "compare_and_write": false, 00:09:07.021 "abort": true, 00:09:07.021 "seek_hole": false, 00:09:07.021 "seek_data": false, 00:09:07.021 "copy": true, 00:09:07.021 "nvme_iov_md": false 00:09:07.021 }, 00:09:07.022 "memory_domains": [ 00:09:07.022 { 00:09:07.022 "dma_device_id": "system", 00:09:07.022 "dma_device_type": 1 00:09:07.022 }, 00:09:07.022 { 00:09:07.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.022 "dma_device_type": 2 00:09:07.022 } 00:09:07.022 ], 00:09:07.022 "driver_specific": {} 00:09:07.022 } 00:09:07.022 ] 00:09:07.022 13:22:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:07.022 "name": "Existed_Raid", 00:09:07.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.022 "strip_size_kb": 0, 00:09:07.022 "state": "configuring", 00:09:07.022 "raid_level": "raid1", 00:09:07.022 "superblock": false, 00:09:07.022 "num_base_bdevs": 3, 00:09:07.022 "num_base_bdevs_discovered": 1, 00:09:07.022 "num_base_bdevs_operational": 3, 00:09:07.022 "base_bdevs_list": [ 00:09:07.022 { 00:09:07.022 "name": "BaseBdev1", 00:09:07.022 "uuid": "2b0c00ed-b3c3-4108-ba5b-46813ab43b85", 00:09:07.022 "is_configured": true, 00:09:07.022 "data_offset": 0, 00:09:07.022 "data_size": 65536 00:09:07.022 }, 00:09:07.022 { 00:09:07.022 "name": "BaseBdev2", 00:09:07.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.022 "is_configured": false, 00:09:07.022 "data_offset": 0, 00:09:07.022 "data_size": 0 00:09:07.022 }, 00:09:07.022 { 00:09:07.022 "name": "BaseBdev3", 00:09:07.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.022 "is_configured": false, 00:09:07.022 "data_offset": 0, 00:09:07.022 "data_size": 0 00:09:07.022 } 00:09:07.022 ] 00:09:07.022 }' 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.022 13:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.590 [2024-11-20 13:22:49.030936] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.590 [2024-11-20 13:22:49.031076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.590 [2024-11-20 13:22:49.038936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.590 [2024-11-20 13:22:49.041025] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:07.590 [2024-11-20 13:22:49.041101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:07.590 [2024-11-20 13:22:49.041146] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:07.590 [2024-11-20 13:22:49.041174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.590 "name": "Existed_Raid", 00:09:07.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.590 "strip_size_kb": 0, 00:09:07.590 "state": "configuring", 00:09:07.590 "raid_level": "raid1", 00:09:07.590 "superblock": false, 00:09:07.590 "num_base_bdevs": 3, 00:09:07.590 "num_base_bdevs_discovered": 1, 00:09:07.590 "num_base_bdevs_operational": 3, 00:09:07.590 "base_bdevs_list": [ 00:09:07.590 { 00:09:07.590 "name": "BaseBdev1", 00:09:07.590 "uuid": "2b0c00ed-b3c3-4108-ba5b-46813ab43b85", 00:09:07.590 "is_configured": true, 00:09:07.590 "data_offset": 0, 00:09:07.590 "data_size": 65536 00:09:07.590 }, 00:09:07.590 { 00:09:07.590 "name": "BaseBdev2", 00:09:07.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.590 
"is_configured": false, 00:09:07.590 "data_offset": 0, 00:09:07.590 "data_size": 0 00:09:07.590 }, 00:09:07.590 { 00:09:07.590 "name": "BaseBdev3", 00:09:07.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.590 "is_configured": false, 00:09:07.590 "data_offset": 0, 00:09:07.590 "data_size": 0 00:09:07.590 } 00:09:07.590 ] 00:09:07.590 }' 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.590 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.850 [2024-11-20 13:22:49.437434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.850 BaseBdev2 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.850 13:22:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.850 [ 00:09:07.850 { 00:09:07.850 "name": "BaseBdev2", 00:09:07.850 "aliases": [ 00:09:07.850 "5dcaabfb-c20f-4886-974e-5b78ea466bbf" 00:09:07.850 ], 00:09:07.850 "product_name": "Malloc disk", 00:09:07.850 "block_size": 512, 00:09:07.850 "num_blocks": 65536, 00:09:07.850 "uuid": "5dcaabfb-c20f-4886-974e-5b78ea466bbf", 00:09:07.850 "assigned_rate_limits": { 00:09:07.850 "rw_ios_per_sec": 0, 00:09:07.850 "rw_mbytes_per_sec": 0, 00:09:07.850 "r_mbytes_per_sec": 0, 00:09:07.850 "w_mbytes_per_sec": 0 00:09:07.850 }, 00:09:07.850 "claimed": true, 00:09:07.850 "claim_type": "exclusive_write", 00:09:07.850 "zoned": false, 00:09:07.850 "supported_io_types": { 00:09:07.850 "read": true, 00:09:07.850 "write": true, 00:09:07.850 "unmap": true, 00:09:07.850 "flush": true, 00:09:07.850 "reset": true, 00:09:07.850 "nvme_admin": false, 00:09:07.850 "nvme_io": false, 00:09:07.850 "nvme_io_md": false, 00:09:07.850 "write_zeroes": true, 00:09:07.850 "zcopy": true, 00:09:07.850 "get_zone_info": false, 00:09:07.850 "zone_management": false, 00:09:07.850 "zone_append": false, 00:09:07.850 "compare": false, 00:09:07.850 "compare_and_write": false, 00:09:07.850 "abort": true, 00:09:07.850 "seek_hole": false, 00:09:07.850 "seek_data": false, 00:09:07.850 "copy": true, 00:09:07.850 "nvme_iov_md": false 00:09:07.850 }, 00:09:07.850 
"memory_domains": [ 00:09:07.850 { 00:09:07.850 "dma_device_id": "system", 00:09:07.850 "dma_device_type": 1 00:09:07.850 }, 00:09:07.850 { 00:09:07.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.850 "dma_device_type": 2 00:09:07.850 } 00:09:07.850 ], 00:09:07.850 "driver_specific": {} 00:09:07.850 } 00:09:07.850 ] 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.850 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.139 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.139 "name": "Existed_Raid", 00:09:08.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.139 "strip_size_kb": 0, 00:09:08.139 "state": "configuring", 00:09:08.139 "raid_level": "raid1", 00:09:08.139 "superblock": false, 00:09:08.139 "num_base_bdevs": 3, 00:09:08.139 "num_base_bdevs_discovered": 2, 00:09:08.139 "num_base_bdevs_operational": 3, 00:09:08.139 "base_bdevs_list": [ 00:09:08.139 { 00:09:08.139 "name": "BaseBdev1", 00:09:08.139 "uuid": "2b0c00ed-b3c3-4108-ba5b-46813ab43b85", 00:09:08.139 "is_configured": true, 00:09:08.139 "data_offset": 0, 00:09:08.139 "data_size": 65536 00:09:08.139 }, 00:09:08.139 { 00:09:08.139 "name": "BaseBdev2", 00:09:08.139 "uuid": "5dcaabfb-c20f-4886-974e-5b78ea466bbf", 00:09:08.139 "is_configured": true, 00:09:08.139 "data_offset": 0, 00:09:08.139 "data_size": 65536 00:09:08.139 }, 00:09:08.139 { 00:09:08.139 "name": "BaseBdev3", 00:09:08.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.139 "is_configured": false, 00:09:08.139 "data_offset": 0, 00:09:08.139 "data_size": 0 00:09:08.139 } 00:09:08.139 ] 00:09:08.139 }' 00:09:08.140 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.140 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.400 [2024-11-20 13:22:49.960563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.400 [2024-11-20 13:22:49.960722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:08.400 [2024-11-20 13:22:49.960764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:08.400 [2024-11-20 13:22:49.961127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:08.400 [2024-11-20 13:22:49.961330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:08.400 [2024-11-20 13:22:49.961378] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:08.400 [2024-11-20 13:22:49.961675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.400 BaseBdev3 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.400 13:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.400 [ 00:09:08.400 { 00:09:08.400 "name": "BaseBdev3", 00:09:08.400 "aliases": [ 00:09:08.400 "faf9ab9c-985c-4b7a-89ac-8e8ba6fa85aa" 00:09:08.400 ], 00:09:08.400 "product_name": "Malloc disk", 00:09:08.400 "block_size": 512, 00:09:08.400 "num_blocks": 65536, 00:09:08.400 "uuid": "faf9ab9c-985c-4b7a-89ac-8e8ba6fa85aa", 00:09:08.400 "assigned_rate_limits": { 00:09:08.400 "rw_ios_per_sec": 0, 00:09:08.400 "rw_mbytes_per_sec": 0, 00:09:08.400 "r_mbytes_per_sec": 0, 00:09:08.400 "w_mbytes_per_sec": 0 00:09:08.400 }, 00:09:08.400 "claimed": true, 00:09:08.400 "claim_type": "exclusive_write", 00:09:08.400 "zoned": false, 00:09:08.400 "supported_io_types": { 00:09:08.400 "read": true, 00:09:08.400 "write": true, 00:09:08.400 "unmap": true, 00:09:08.400 "flush": true, 00:09:08.400 "reset": true, 00:09:08.400 "nvme_admin": false, 00:09:08.400 "nvme_io": false, 00:09:08.400 "nvme_io_md": false, 00:09:08.400 "write_zeroes": true, 00:09:08.400 "zcopy": true, 00:09:08.400 "get_zone_info": false, 00:09:08.400 "zone_management": false, 00:09:08.400 "zone_append": false, 00:09:08.400 "compare": false, 00:09:08.400 "compare_and_write": false, 00:09:08.400 "abort": true, 00:09:08.400 "seek_hole": false, 00:09:08.400 "seek_data": false, 00:09:08.400 
"copy": true, 00:09:08.400 "nvme_iov_md": false 00:09:08.400 }, 00:09:08.400 "memory_domains": [ 00:09:08.400 { 00:09:08.400 "dma_device_id": "system", 00:09:08.400 "dma_device_type": 1 00:09:08.400 }, 00:09:08.400 { 00:09:08.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.400 "dma_device_type": 2 00:09:08.400 } 00:09:08.400 ], 00:09:08.400 "driver_specific": {} 00:09:08.400 } 00:09:08.400 ] 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.400 13:22:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.400 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.400 "name": "Existed_Raid", 00:09:08.400 "uuid": "25675b57-61c8-4801-9dc6-0aa3698fb796", 00:09:08.400 "strip_size_kb": 0, 00:09:08.400 "state": "online", 00:09:08.400 "raid_level": "raid1", 00:09:08.400 "superblock": false, 00:09:08.400 "num_base_bdevs": 3, 00:09:08.400 "num_base_bdevs_discovered": 3, 00:09:08.400 "num_base_bdevs_operational": 3, 00:09:08.400 "base_bdevs_list": [ 00:09:08.400 { 00:09:08.400 "name": "BaseBdev1", 00:09:08.400 "uuid": "2b0c00ed-b3c3-4108-ba5b-46813ab43b85", 00:09:08.400 "is_configured": true, 00:09:08.400 "data_offset": 0, 00:09:08.400 "data_size": 65536 00:09:08.400 }, 00:09:08.400 { 00:09:08.400 "name": "BaseBdev2", 00:09:08.400 "uuid": "5dcaabfb-c20f-4886-974e-5b78ea466bbf", 00:09:08.400 "is_configured": true, 00:09:08.400 "data_offset": 0, 00:09:08.400 "data_size": 65536 00:09:08.400 }, 00:09:08.400 { 00:09:08.400 "name": "BaseBdev3", 00:09:08.400 "uuid": "faf9ab9c-985c-4b7a-89ac-8e8ba6fa85aa", 00:09:08.401 "is_configured": true, 00:09:08.401 "data_offset": 0, 00:09:08.401 "data_size": 65536 00:09:08.401 } 00:09:08.401 ] 00:09:08.401 }' 00:09:08.401 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.401 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.970 13:22:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:08.970 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:08.970 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:08.970 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:08.970 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:08.970 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:08.970 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:08.970 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:08.970 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.970 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.970 [2024-11-20 13:22:50.444116] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.970 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.970 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:08.970 "name": "Existed_Raid", 00:09:08.970 "aliases": [ 00:09:08.970 "25675b57-61c8-4801-9dc6-0aa3698fb796" 00:09:08.970 ], 00:09:08.970 "product_name": "Raid Volume", 00:09:08.970 "block_size": 512, 00:09:08.970 "num_blocks": 65536, 00:09:08.970 "uuid": "25675b57-61c8-4801-9dc6-0aa3698fb796", 00:09:08.970 "assigned_rate_limits": { 00:09:08.970 "rw_ios_per_sec": 0, 00:09:08.970 "rw_mbytes_per_sec": 0, 00:09:08.970 "r_mbytes_per_sec": 0, 00:09:08.970 "w_mbytes_per_sec": 0 00:09:08.970 }, 00:09:08.970 "claimed": false, 00:09:08.970 "zoned": false, 
00:09:08.970 "supported_io_types": { 00:09:08.970 "read": true, 00:09:08.970 "write": true, 00:09:08.970 "unmap": false, 00:09:08.970 "flush": false, 00:09:08.970 "reset": true, 00:09:08.970 "nvme_admin": false, 00:09:08.970 "nvme_io": false, 00:09:08.970 "nvme_io_md": false, 00:09:08.970 "write_zeroes": true, 00:09:08.970 "zcopy": false, 00:09:08.970 "get_zone_info": false, 00:09:08.970 "zone_management": false, 00:09:08.970 "zone_append": false, 00:09:08.970 "compare": false, 00:09:08.970 "compare_and_write": false, 00:09:08.970 "abort": false, 00:09:08.970 "seek_hole": false, 00:09:08.970 "seek_data": false, 00:09:08.970 "copy": false, 00:09:08.970 "nvme_iov_md": false 00:09:08.970 }, 00:09:08.970 "memory_domains": [ 00:09:08.970 { 00:09:08.970 "dma_device_id": "system", 00:09:08.970 "dma_device_type": 1 00:09:08.970 }, 00:09:08.970 { 00:09:08.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.970 "dma_device_type": 2 00:09:08.970 }, 00:09:08.970 { 00:09:08.970 "dma_device_id": "system", 00:09:08.970 "dma_device_type": 1 00:09:08.970 }, 00:09:08.970 { 00:09:08.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.970 "dma_device_type": 2 00:09:08.970 }, 00:09:08.970 { 00:09:08.970 "dma_device_id": "system", 00:09:08.970 "dma_device_type": 1 00:09:08.970 }, 00:09:08.970 { 00:09:08.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.970 "dma_device_type": 2 00:09:08.970 } 00:09:08.970 ], 00:09:08.970 "driver_specific": { 00:09:08.970 "raid": { 00:09:08.970 "uuid": "25675b57-61c8-4801-9dc6-0aa3698fb796", 00:09:08.970 "strip_size_kb": 0, 00:09:08.970 "state": "online", 00:09:08.970 "raid_level": "raid1", 00:09:08.970 "superblock": false, 00:09:08.970 "num_base_bdevs": 3, 00:09:08.970 "num_base_bdevs_discovered": 3, 00:09:08.970 "num_base_bdevs_operational": 3, 00:09:08.970 "base_bdevs_list": [ 00:09:08.971 { 00:09:08.971 "name": "BaseBdev1", 00:09:08.971 "uuid": "2b0c00ed-b3c3-4108-ba5b-46813ab43b85", 00:09:08.971 "is_configured": true, 00:09:08.971 
"data_offset": 0, 00:09:08.971 "data_size": 65536 00:09:08.971 }, 00:09:08.971 { 00:09:08.971 "name": "BaseBdev2", 00:09:08.971 "uuid": "5dcaabfb-c20f-4886-974e-5b78ea466bbf", 00:09:08.971 "is_configured": true, 00:09:08.971 "data_offset": 0, 00:09:08.971 "data_size": 65536 00:09:08.971 }, 00:09:08.971 { 00:09:08.971 "name": "BaseBdev3", 00:09:08.971 "uuid": "faf9ab9c-985c-4b7a-89ac-8e8ba6fa85aa", 00:09:08.971 "is_configured": true, 00:09:08.971 "data_offset": 0, 00:09:08.971 "data_size": 65536 00:09:08.971 } 00:09:08.971 ] 00:09:08.971 } 00:09:08.971 } 00:09:08.971 }' 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:08.971 BaseBdev2 00:09:08.971 BaseBdev3' 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.971 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.231 [2024-11-20 13:22:50.703425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.231 "name": "Existed_Raid", 00:09:09.231 "uuid": "25675b57-61c8-4801-9dc6-0aa3698fb796", 00:09:09.231 "strip_size_kb": 0, 00:09:09.231 "state": "online", 00:09:09.231 "raid_level": "raid1", 00:09:09.231 "superblock": false, 00:09:09.231 "num_base_bdevs": 3, 00:09:09.231 "num_base_bdevs_discovered": 2, 00:09:09.231 "num_base_bdevs_operational": 2, 00:09:09.231 "base_bdevs_list": [ 00:09:09.231 { 00:09:09.231 "name": null, 00:09:09.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.231 "is_configured": false, 00:09:09.231 "data_offset": 0, 00:09:09.231 "data_size": 65536 00:09:09.231 }, 00:09:09.231 { 00:09:09.231 "name": "BaseBdev2", 00:09:09.231 "uuid": "5dcaabfb-c20f-4886-974e-5b78ea466bbf", 00:09:09.231 "is_configured": true, 00:09:09.231 "data_offset": 0, 00:09:09.231 "data_size": 65536 00:09:09.231 }, 00:09:09.231 { 00:09:09.231 "name": "BaseBdev3", 00:09:09.231 "uuid": "faf9ab9c-985c-4b7a-89ac-8e8ba6fa85aa", 00:09:09.231 "is_configured": true, 00:09:09.231 "data_offset": 0, 00:09:09.231 "data_size": 65536 00:09:09.231 } 00:09:09.231 ] 
00:09:09.231 }' 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.231 13:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.491 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:09.491 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.752 [2024-11-20 13:22:51.210482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:09.752 13:22:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.752 [2024-11-20 13:22:51.296959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:09.752 [2024-11-20 13:22:51.297168] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.752 [2024-11-20 13:22:51.318684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.752 [2024-11-20 13:22:51.318821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.752 [2024-11-20 13:22:51.318886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:09.752 13:22:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.752 BaseBdev2 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.752 
13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.752 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.014 [ 00:09:10.014 { 00:09:10.014 "name": "BaseBdev2", 00:09:10.014 "aliases": [ 00:09:10.014 "ae25fb96-9c0f-41f5-b572-ac09cb331e1f" 00:09:10.014 ], 00:09:10.014 "product_name": "Malloc disk", 00:09:10.014 "block_size": 512, 00:09:10.014 "num_blocks": 65536, 00:09:10.014 "uuid": "ae25fb96-9c0f-41f5-b572-ac09cb331e1f", 00:09:10.014 "assigned_rate_limits": { 00:09:10.014 "rw_ios_per_sec": 0, 00:09:10.014 "rw_mbytes_per_sec": 0, 00:09:10.014 "r_mbytes_per_sec": 0, 00:09:10.014 "w_mbytes_per_sec": 0 00:09:10.014 }, 00:09:10.014 "claimed": false, 00:09:10.014 "zoned": false, 00:09:10.014 "supported_io_types": { 00:09:10.014 "read": true, 00:09:10.014 "write": true, 00:09:10.014 "unmap": true, 00:09:10.014 "flush": true, 00:09:10.014 "reset": true, 00:09:10.014 "nvme_admin": false, 00:09:10.014 "nvme_io": false, 00:09:10.014 "nvme_io_md": false, 00:09:10.014 "write_zeroes": true, 
00:09:10.014 "zcopy": true, 00:09:10.014 "get_zone_info": false, 00:09:10.014 "zone_management": false, 00:09:10.014 "zone_append": false, 00:09:10.014 "compare": false, 00:09:10.014 "compare_and_write": false, 00:09:10.014 "abort": true, 00:09:10.014 "seek_hole": false, 00:09:10.014 "seek_data": false, 00:09:10.014 "copy": true, 00:09:10.014 "nvme_iov_md": false 00:09:10.014 }, 00:09:10.014 "memory_domains": [ 00:09:10.014 { 00:09:10.014 "dma_device_id": "system", 00:09:10.014 "dma_device_type": 1 00:09:10.014 }, 00:09:10.014 { 00:09:10.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.014 "dma_device_type": 2 00:09:10.014 } 00:09:10.014 ], 00:09:10.014 "driver_specific": {} 00:09:10.014 } 00:09:10.014 ] 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.014 BaseBdev3 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.014 13:22:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.014 [ 00:09:10.014 { 00:09:10.014 "name": "BaseBdev3", 00:09:10.014 "aliases": [ 00:09:10.014 "91834e1c-b4a7-4bb3-b4dc-7daa783524c7" 00:09:10.014 ], 00:09:10.014 "product_name": "Malloc disk", 00:09:10.014 "block_size": 512, 00:09:10.014 "num_blocks": 65536, 00:09:10.014 "uuid": "91834e1c-b4a7-4bb3-b4dc-7daa783524c7", 00:09:10.014 "assigned_rate_limits": { 00:09:10.014 "rw_ios_per_sec": 0, 00:09:10.014 "rw_mbytes_per_sec": 0, 00:09:10.014 "r_mbytes_per_sec": 0, 00:09:10.014 "w_mbytes_per_sec": 0 00:09:10.014 }, 00:09:10.014 "claimed": false, 00:09:10.014 "zoned": false, 00:09:10.014 "supported_io_types": { 00:09:10.014 "read": true, 00:09:10.014 "write": true, 00:09:10.014 "unmap": true, 00:09:10.014 "flush": true, 00:09:10.014 "reset": true, 00:09:10.014 "nvme_admin": false, 00:09:10.014 "nvme_io": false, 00:09:10.014 "nvme_io_md": false, 00:09:10.014 "write_zeroes": true, 
00:09:10.014 "zcopy": true, 00:09:10.014 "get_zone_info": false, 00:09:10.014 "zone_management": false, 00:09:10.014 "zone_append": false, 00:09:10.014 "compare": false, 00:09:10.014 "compare_and_write": false, 00:09:10.014 "abort": true, 00:09:10.014 "seek_hole": false, 00:09:10.014 "seek_data": false, 00:09:10.014 "copy": true, 00:09:10.014 "nvme_iov_md": false 00:09:10.014 }, 00:09:10.014 "memory_domains": [ 00:09:10.014 { 00:09:10.014 "dma_device_id": "system", 00:09:10.014 "dma_device_type": 1 00:09:10.014 }, 00:09:10.014 { 00:09:10.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.014 "dma_device_type": 2 00:09:10.014 } 00:09:10.014 ], 00:09:10.014 "driver_specific": {} 00:09:10.014 } 00:09:10.014 ] 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.014 [2024-11-20 13:22:51.496942] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:10.014 [2024-11-20 13:22:51.497104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:10.014 [2024-11-20 13:22:51.497148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.014 [2024-11-20 13:22:51.499440] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.014 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:10.014 "name": "Existed_Raid", 00:09:10.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.014 "strip_size_kb": 0, 00:09:10.014 "state": "configuring", 00:09:10.014 "raid_level": "raid1", 00:09:10.014 "superblock": false, 00:09:10.014 "num_base_bdevs": 3, 00:09:10.014 "num_base_bdevs_discovered": 2, 00:09:10.014 "num_base_bdevs_operational": 3, 00:09:10.014 "base_bdevs_list": [ 00:09:10.014 { 00:09:10.014 "name": "BaseBdev1", 00:09:10.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.014 "is_configured": false, 00:09:10.014 "data_offset": 0, 00:09:10.014 "data_size": 0 00:09:10.014 }, 00:09:10.014 { 00:09:10.014 "name": "BaseBdev2", 00:09:10.014 "uuid": "ae25fb96-9c0f-41f5-b572-ac09cb331e1f", 00:09:10.014 "is_configured": true, 00:09:10.014 "data_offset": 0, 00:09:10.014 "data_size": 65536 00:09:10.014 }, 00:09:10.014 { 00:09:10.014 "name": "BaseBdev3", 00:09:10.014 "uuid": "91834e1c-b4a7-4bb3-b4dc-7daa783524c7", 00:09:10.014 "is_configured": true, 00:09:10.014 "data_offset": 0, 00:09:10.015 "data_size": 65536 00:09:10.015 } 00:09:10.015 ] 00:09:10.015 }' 00:09:10.015 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.015 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.585 [2024-11-20 13:22:51.964172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.585 13:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.585 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.585 "name": "Existed_Raid", 00:09:10.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.585 "strip_size_kb": 0, 00:09:10.585 "state": "configuring", 00:09:10.585 "raid_level": "raid1", 00:09:10.585 "superblock": false, 00:09:10.585 "num_base_bdevs": 3, 
00:09:10.585 "num_base_bdevs_discovered": 1, 00:09:10.585 "num_base_bdevs_operational": 3, 00:09:10.585 "base_bdevs_list": [ 00:09:10.585 { 00:09:10.585 "name": "BaseBdev1", 00:09:10.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.585 "is_configured": false, 00:09:10.585 "data_offset": 0, 00:09:10.585 "data_size": 0 00:09:10.585 }, 00:09:10.585 { 00:09:10.585 "name": null, 00:09:10.585 "uuid": "ae25fb96-9c0f-41f5-b572-ac09cb331e1f", 00:09:10.585 "is_configured": false, 00:09:10.585 "data_offset": 0, 00:09:10.585 "data_size": 65536 00:09:10.585 }, 00:09:10.585 { 00:09:10.585 "name": "BaseBdev3", 00:09:10.585 "uuid": "91834e1c-b4a7-4bb3-b4dc-7daa783524c7", 00:09:10.585 "is_configured": true, 00:09:10.585 "data_offset": 0, 00:09:10.585 "data_size": 65536 00:09:10.585 } 00:09:10.585 ] 00:09:10.585 }' 00:09:10.585 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.585 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.846 13:22:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.846 [2024-11-20 13:22:52.454408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.846 BaseBdev1 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.846 [ 00:09:10.846 { 00:09:10.846 "name": "BaseBdev1", 00:09:10.846 "aliases": [ 00:09:10.846 "f2538c84-15ad-46b3-b711-0fac3f8d4766" 00:09:10.846 ], 00:09:10.846 "product_name": "Malloc disk", 
00:09:10.846 "block_size": 512, 00:09:10.846 "num_blocks": 65536, 00:09:10.846 "uuid": "f2538c84-15ad-46b3-b711-0fac3f8d4766", 00:09:10.846 "assigned_rate_limits": { 00:09:10.846 "rw_ios_per_sec": 0, 00:09:10.846 "rw_mbytes_per_sec": 0, 00:09:10.846 "r_mbytes_per_sec": 0, 00:09:10.846 "w_mbytes_per_sec": 0 00:09:10.846 }, 00:09:10.846 "claimed": true, 00:09:10.846 "claim_type": "exclusive_write", 00:09:10.846 "zoned": false, 00:09:10.846 "supported_io_types": { 00:09:10.846 "read": true, 00:09:10.846 "write": true, 00:09:10.846 "unmap": true, 00:09:10.846 "flush": true, 00:09:10.846 "reset": true, 00:09:10.846 "nvme_admin": false, 00:09:10.846 "nvme_io": false, 00:09:10.846 "nvme_io_md": false, 00:09:10.846 "write_zeroes": true, 00:09:10.846 "zcopy": true, 00:09:10.846 "get_zone_info": false, 00:09:10.846 "zone_management": false, 00:09:10.846 "zone_append": false, 00:09:10.846 "compare": false, 00:09:10.846 "compare_and_write": false, 00:09:10.846 "abort": true, 00:09:10.846 "seek_hole": false, 00:09:10.846 "seek_data": false, 00:09:10.846 "copy": true, 00:09:10.846 "nvme_iov_md": false 00:09:10.846 }, 00:09:10.846 "memory_domains": [ 00:09:10.846 { 00:09:10.846 "dma_device_id": "system", 00:09:10.846 "dma_device_type": 1 00:09:10.846 }, 00:09:10.846 { 00:09:10.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.846 "dma_device_type": 2 00:09:10.846 } 00:09:10.846 ], 00:09:10.846 "driver_specific": {} 00:09:10.846 } 00:09:10.846 ] 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.846 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.107 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.107 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.107 "name": "Existed_Raid", 00:09:11.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.107 "strip_size_kb": 0, 00:09:11.107 "state": "configuring", 00:09:11.107 "raid_level": "raid1", 00:09:11.107 "superblock": false, 00:09:11.107 "num_base_bdevs": 3, 00:09:11.107 "num_base_bdevs_discovered": 2, 00:09:11.107 "num_base_bdevs_operational": 3, 00:09:11.107 "base_bdevs_list": [ 00:09:11.107 { 00:09:11.107 "name": "BaseBdev1", 00:09:11.107 "uuid": 
"f2538c84-15ad-46b3-b711-0fac3f8d4766", 00:09:11.107 "is_configured": true, 00:09:11.107 "data_offset": 0, 00:09:11.107 "data_size": 65536 00:09:11.107 }, 00:09:11.107 { 00:09:11.107 "name": null, 00:09:11.107 "uuid": "ae25fb96-9c0f-41f5-b572-ac09cb331e1f", 00:09:11.107 "is_configured": false, 00:09:11.107 "data_offset": 0, 00:09:11.107 "data_size": 65536 00:09:11.107 }, 00:09:11.107 { 00:09:11.107 "name": "BaseBdev3", 00:09:11.107 "uuid": "91834e1c-b4a7-4bb3-b4dc-7daa783524c7", 00:09:11.107 "is_configured": true, 00:09:11.107 "data_offset": 0, 00:09:11.107 "data_size": 65536 00:09:11.107 } 00:09:11.107 ] 00:09:11.107 }' 00:09:11.107 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.107 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.367 [2024-11-20 13:22:52.989572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:11.367 13:22:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.367 13:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.367 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.367 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.367 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.367 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.367 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.626 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.626 "name": "Existed_Raid", 00:09:11.626 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:11.626 "strip_size_kb": 0, 00:09:11.626 "state": "configuring", 00:09:11.626 "raid_level": "raid1", 00:09:11.626 "superblock": false, 00:09:11.626 "num_base_bdevs": 3, 00:09:11.626 "num_base_bdevs_discovered": 1, 00:09:11.626 "num_base_bdevs_operational": 3, 00:09:11.626 "base_bdevs_list": [ 00:09:11.626 { 00:09:11.626 "name": "BaseBdev1", 00:09:11.626 "uuid": "f2538c84-15ad-46b3-b711-0fac3f8d4766", 00:09:11.626 "is_configured": true, 00:09:11.626 "data_offset": 0, 00:09:11.626 "data_size": 65536 00:09:11.626 }, 00:09:11.626 { 00:09:11.626 "name": null, 00:09:11.626 "uuid": "ae25fb96-9c0f-41f5-b572-ac09cb331e1f", 00:09:11.626 "is_configured": false, 00:09:11.626 "data_offset": 0, 00:09:11.626 "data_size": 65536 00:09:11.626 }, 00:09:11.626 { 00:09:11.626 "name": null, 00:09:11.626 "uuid": "91834e1c-b4a7-4bb3-b4dc-7daa783524c7", 00:09:11.626 "is_configured": false, 00:09:11.626 "data_offset": 0, 00:09:11.626 "data_size": 65536 00:09:11.626 } 00:09:11.626 ] 00:09:11.626 }' 00:09:11.626 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.626 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.886 [2024-11-20 13:22:53.476735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.886 "name": "Existed_Raid", 00:09:11.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.886 "strip_size_kb": 0, 00:09:11.886 "state": "configuring", 00:09:11.886 "raid_level": "raid1", 00:09:11.886 "superblock": false, 00:09:11.886 "num_base_bdevs": 3, 00:09:11.886 "num_base_bdevs_discovered": 2, 00:09:11.886 "num_base_bdevs_operational": 3, 00:09:11.886 "base_bdevs_list": [ 00:09:11.886 { 00:09:11.886 "name": "BaseBdev1", 00:09:11.886 "uuid": "f2538c84-15ad-46b3-b711-0fac3f8d4766", 00:09:11.886 "is_configured": true, 00:09:11.886 "data_offset": 0, 00:09:11.886 "data_size": 65536 00:09:11.886 }, 00:09:11.886 { 00:09:11.886 "name": null, 00:09:11.886 "uuid": "ae25fb96-9c0f-41f5-b572-ac09cb331e1f", 00:09:11.886 "is_configured": false, 00:09:11.886 "data_offset": 0, 00:09:11.886 "data_size": 65536 00:09:11.886 }, 00:09:11.886 { 00:09:11.886 "name": "BaseBdev3", 00:09:11.886 "uuid": "91834e1c-b4a7-4bb3-b4dc-7daa783524c7", 00:09:11.886 "is_configured": true, 00:09:11.886 "data_offset": 0, 00:09:11.886 "data_size": 65536 00:09:11.886 } 00:09:11.886 ] 00:09:11.886 }' 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.886 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.456 [2024-11-20 13:22:53.955921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.456 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.457 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.457 13:22:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.457 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.457 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.457 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.457 13:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.457 13:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.457 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.457 "name": "Existed_Raid", 00:09:12.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.457 "strip_size_kb": 0, 00:09:12.457 "state": "configuring", 00:09:12.457 "raid_level": "raid1", 00:09:12.457 "superblock": false, 00:09:12.457 "num_base_bdevs": 3, 00:09:12.457 "num_base_bdevs_discovered": 1, 00:09:12.457 "num_base_bdevs_operational": 3, 00:09:12.457 "base_bdevs_list": [ 00:09:12.457 { 00:09:12.457 "name": null, 00:09:12.457 "uuid": "f2538c84-15ad-46b3-b711-0fac3f8d4766", 00:09:12.457 "is_configured": false, 00:09:12.457 "data_offset": 0, 00:09:12.457 "data_size": 65536 00:09:12.457 }, 00:09:12.457 { 00:09:12.457 "name": null, 00:09:12.457 "uuid": "ae25fb96-9c0f-41f5-b572-ac09cb331e1f", 00:09:12.457 "is_configured": false, 00:09:12.457 "data_offset": 0, 00:09:12.457 "data_size": 65536 00:09:12.457 }, 00:09:12.457 { 00:09:12.457 "name": "BaseBdev3", 00:09:12.457 "uuid": "91834e1c-b4a7-4bb3-b4dc-7daa783524c7", 00:09:12.457 "is_configured": true, 00:09:12.457 "data_offset": 0, 00:09:12.457 "data_size": 65536 00:09:12.457 } 00:09:12.457 ] 00:09:12.457 }' 00:09:12.457 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.457 13:22:54 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:13.026 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.026 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.026 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.026 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:13.026 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.026 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:13.026 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.027 [2024-11-20 13:22:54.429590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.027 "name": "Existed_Raid", 00:09:13.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.027 "strip_size_kb": 0, 00:09:13.027 "state": "configuring", 00:09:13.027 "raid_level": "raid1", 00:09:13.027 "superblock": false, 00:09:13.027 "num_base_bdevs": 3, 00:09:13.027 "num_base_bdevs_discovered": 2, 00:09:13.027 "num_base_bdevs_operational": 3, 00:09:13.027 "base_bdevs_list": [ 00:09:13.027 { 00:09:13.027 "name": null, 00:09:13.027 "uuid": "f2538c84-15ad-46b3-b711-0fac3f8d4766", 00:09:13.027 "is_configured": false, 00:09:13.027 "data_offset": 0, 00:09:13.027 "data_size": 65536 00:09:13.027 }, 00:09:13.027 { 00:09:13.027 "name": "BaseBdev2", 00:09:13.027 "uuid": "ae25fb96-9c0f-41f5-b572-ac09cb331e1f", 00:09:13.027 "is_configured": true, 00:09:13.027 "data_offset": 0, 00:09:13.027 "data_size": 65536 00:09:13.027 }, 00:09:13.027 { 
00:09:13.027 "name": "BaseBdev3", 00:09:13.027 "uuid": "91834e1c-b4a7-4bb3-b4dc-7daa783524c7", 00:09:13.027 "is_configured": true, 00:09:13.027 "data_offset": 0, 00:09:13.027 "data_size": 65536 00:09:13.027 } 00:09:13.027 ] 00:09:13.027 }' 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.027 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.286 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.286 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.286 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.286 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.286 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.286 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:13.286 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.287 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.287 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.287 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:13.287 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.547 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f2538c84-15ad-46b3-b711-0fac3f8d4766 00:09:13.547 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.547 13:22:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.547 [2024-11-20 13:22:54.987771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:13.547 NewBaseBdev 00:09:13.547 [2024-11-20 13:22:54.987887] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:13.547 [2024-11-20 13:22:54.987900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:13.547 [2024-11-20 13:22:54.988219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:09:13.547 [2024-11-20 13:22:54.988355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:13.547 [2024-11-20 13:22:54.988370] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:13.547 [2024-11-20 13:22:54.988567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.547 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.547 13:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:13.547 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:13.547 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.547 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:13.547 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.547 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.547 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.547 13:22:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.547 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.547 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.547 13:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:13.547 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.547 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.547 [ 00:09:13.547 { 00:09:13.547 "name": "NewBaseBdev", 00:09:13.547 "aliases": [ 00:09:13.547 "f2538c84-15ad-46b3-b711-0fac3f8d4766" 00:09:13.547 ], 00:09:13.547 "product_name": "Malloc disk", 00:09:13.547 "block_size": 512, 00:09:13.547 "num_blocks": 65536, 00:09:13.547 "uuid": "f2538c84-15ad-46b3-b711-0fac3f8d4766", 00:09:13.547 "assigned_rate_limits": { 00:09:13.547 "rw_ios_per_sec": 0, 00:09:13.547 "rw_mbytes_per_sec": 0, 00:09:13.547 "r_mbytes_per_sec": 0, 00:09:13.547 "w_mbytes_per_sec": 0 00:09:13.547 }, 00:09:13.547 "claimed": true, 00:09:13.547 "claim_type": "exclusive_write", 00:09:13.547 "zoned": false, 00:09:13.547 "supported_io_types": { 00:09:13.547 "read": true, 00:09:13.547 "write": true, 00:09:13.547 "unmap": true, 00:09:13.547 "flush": true, 00:09:13.547 "reset": true, 00:09:13.547 "nvme_admin": false, 00:09:13.547 "nvme_io": false, 00:09:13.547 "nvme_io_md": false, 00:09:13.547 "write_zeroes": true, 00:09:13.547 "zcopy": true, 00:09:13.547 "get_zone_info": false, 00:09:13.547 "zone_management": false, 00:09:13.547 "zone_append": false, 00:09:13.547 "compare": false, 00:09:13.547 "compare_and_write": false, 00:09:13.547 "abort": true, 00:09:13.547 "seek_hole": false, 00:09:13.547 "seek_data": false, 00:09:13.548 "copy": true, 00:09:13.548 "nvme_iov_md": false 00:09:13.548 }, 00:09:13.548 "memory_domains": [ 00:09:13.548 { 00:09:13.548 
"dma_device_id": "system", 00:09:13.548 "dma_device_type": 1 00:09:13.548 }, 00:09:13.548 { 00:09:13.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.548 "dma_device_type": 2 00:09:13.548 } 00:09:13.548 ], 00:09:13.548 "driver_specific": {} 00:09:13.548 } 00:09:13.548 ] 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.548 "name": "Existed_Raid", 00:09:13.548 "uuid": "5eec5d84-c07b-4594-b3a1-3d3ddf75c789", 00:09:13.548 "strip_size_kb": 0, 00:09:13.548 "state": "online", 00:09:13.548 "raid_level": "raid1", 00:09:13.548 "superblock": false, 00:09:13.548 "num_base_bdevs": 3, 00:09:13.548 "num_base_bdevs_discovered": 3, 00:09:13.548 "num_base_bdevs_operational": 3, 00:09:13.548 "base_bdevs_list": [ 00:09:13.548 { 00:09:13.548 "name": "NewBaseBdev", 00:09:13.548 "uuid": "f2538c84-15ad-46b3-b711-0fac3f8d4766", 00:09:13.548 "is_configured": true, 00:09:13.548 "data_offset": 0, 00:09:13.548 "data_size": 65536 00:09:13.548 }, 00:09:13.548 { 00:09:13.548 "name": "BaseBdev2", 00:09:13.548 "uuid": "ae25fb96-9c0f-41f5-b572-ac09cb331e1f", 00:09:13.548 "is_configured": true, 00:09:13.548 "data_offset": 0, 00:09:13.548 "data_size": 65536 00:09:13.548 }, 00:09:13.548 { 00:09:13.548 "name": "BaseBdev3", 00:09:13.548 "uuid": "91834e1c-b4a7-4bb3-b4dc-7daa783524c7", 00:09:13.548 "is_configured": true, 00:09:13.548 "data_offset": 0, 00:09:13.548 "data_size": 65536 00:09:13.548 } 00:09:13.548 ] 00:09:13.548 }' 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.548 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.808 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:13.808 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:13.808 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:13.808 13:22:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:13.808 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:13.808 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:13.808 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:13.808 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:13.808 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.808 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.808 [2024-11-20 13:22:55.459363] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.068 "name": "Existed_Raid", 00:09:14.068 "aliases": [ 00:09:14.068 "5eec5d84-c07b-4594-b3a1-3d3ddf75c789" 00:09:14.068 ], 00:09:14.068 "product_name": "Raid Volume", 00:09:14.068 "block_size": 512, 00:09:14.068 "num_blocks": 65536, 00:09:14.068 "uuid": "5eec5d84-c07b-4594-b3a1-3d3ddf75c789", 00:09:14.068 "assigned_rate_limits": { 00:09:14.068 "rw_ios_per_sec": 0, 00:09:14.068 "rw_mbytes_per_sec": 0, 00:09:14.068 "r_mbytes_per_sec": 0, 00:09:14.068 "w_mbytes_per_sec": 0 00:09:14.068 }, 00:09:14.068 "claimed": false, 00:09:14.068 "zoned": false, 00:09:14.068 "supported_io_types": { 00:09:14.068 "read": true, 00:09:14.068 "write": true, 00:09:14.068 "unmap": false, 00:09:14.068 "flush": false, 00:09:14.068 "reset": true, 00:09:14.068 "nvme_admin": false, 00:09:14.068 "nvme_io": false, 00:09:14.068 "nvme_io_md": false, 00:09:14.068 "write_zeroes": true, 00:09:14.068 "zcopy": false, 00:09:14.068 
"get_zone_info": false, 00:09:14.068 "zone_management": false, 00:09:14.068 "zone_append": false, 00:09:14.068 "compare": false, 00:09:14.068 "compare_and_write": false, 00:09:14.068 "abort": false, 00:09:14.068 "seek_hole": false, 00:09:14.068 "seek_data": false, 00:09:14.068 "copy": false, 00:09:14.068 "nvme_iov_md": false 00:09:14.068 }, 00:09:14.068 "memory_domains": [ 00:09:14.068 { 00:09:14.068 "dma_device_id": "system", 00:09:14.068 "dma_device_type": 1 00:09:14.068 }, 00:09:14.068 { 00:09:14.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.068 "dma_device_type": 2 00:09:14.068 }, 00:09:14.068 { 00:09:14.068 "dma_device_id": "system", 00:09:14.068 "dma_device_type": 1 00:09:14.068 }, 00:09:14.068 { 00:09:14.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.068 "dma_device_type": 2 00:09:14.068 }, 00:09:14.068 { 00:09:14.068 "dma_device_id": "system", 00:09:14.068 "dma_device_type": 1 00:09:14.068 }, 00:09:14.068 { 00:09:14.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.068 "dma_device_type": 2 00:09:14.068 } 00:09:14.068 ], 00:09:14.068 "driver_specific": { 00:09:14.068 "raid": { 00:09:14.068 "uuid": "5eec5d84-c07b-4594-b3a1-3d3ddf75c789", 00:09:14.068 "strip_size_kb": 0, 00:09:14.068 "state": "online", 00:09:14.068 "raid_level": "raid1", 00:09:14.068 "superblock": false, 00:09:14.068 "num_base_bdevs": 3, 00:09:14.068 "num_base_bdevs_discovered": 3, 00:09:14.068 "num_base_bdevs_operational": 3, 00:09:14.068 "base_bdevs_list": [ 00:09:14.068 { 00:09:14.068 "name": "NewBaseBdev", 00:09:14.068 "uuid": "f2538c84-15ad-46b3-b711-0fac3f8d4766", 00:09:14.068 "is_configured": true, 00:09:14.068 "data_offset": 0, 00:09:14.068 "data_size": 65536 00:09:14.068 }, 00:09:14.068 { 00:09:14.068 "name": "BaseBdev2", 00:09:14.068 "uuid": "ae25fb96-9c0f-41f5-b572-ac09cb331e1f", 00:09:14.068 "is_configured": true, 00:09:14.068 "data_offset": 0, 00:09:14.068 "data_size": 65536 00:09:14.068 }, 00:09:14.068 { 00:09:14.068 "name": "BaseBdev3", 00:09:14.068 "uuid": 
"91834e1c-b4a7-4bb3-b4dc-7daa783524c7", 00:09:14.068 "is_configured": true, 00:09:14.068 "data_offset": 0, 00:09:14.068 "data_size": 65536 00:09:14.068 } 00:09:14.068 ] 00:09:14.068 } 00:09:14.068 } 00:09:14.068 }' 00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:14.068 BaseBdev2 00:09:14.068 BaseBdev3' 00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:14.068 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.068 
[2024-11-20 13:22:55.730550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:14.068 [2024-11-20 13:22:55.730618] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:14.069 [2024-11-20 13:22:55.730709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:14.069 [2024-11-20 13:22:55.731002] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:14.069 [2024-11-20 13:22:55.731071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline
00:09:14.328 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:14.328 13:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78153
00:09:14.328 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 78153 ']'
00:09:14.328 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 78153
00:09:14.328 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:09:14.328 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:14.328 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78153
00:09:14.328 killing process with pid 78153
13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:14.328 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:14.328 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78153'
00:09:14.328 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 78153
00:09:14.328 [2024-11-20 
13:22:55.765505] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:14.328 13:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 78153
00:09:14.328 [2024-11-20 13:22:55.796239] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:09:14.589
00:09:14.589 real 0m8.911s
00:09:14.589 user 0m15.248s
00:09:14.589 sys 0m1.779s
00:09:14.589 ************************************
00:09:14.589 END TEST raid_state_function_test
00:09:14.589 ************************************
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.589 13:22:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true
00:09:14.589 13:22:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:14.589 13:22:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:14.589 13:22:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:14.589 ************************************
00:09:14.589 START TEST raid_state_function_test_sb
00:09:14.589 ************************************
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:14.589 13:22:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:09:14.589 
13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78757
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78757'
Process raid pid: 78757
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78757
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78757 ']'
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:14.589 13:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:14.589 [2024-11-20 13:22:56.153555] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:09:14.589 [2024-11-20 13:22:56.153775] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:14.849 [2024-11-20 13:22:56.308908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:14.849 [2024-11-20 13:22:56.334046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:14.849 [2024-11-20 13:22:56.376800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:14.849 [2024-11-20 13:22:56.376921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:15.462 13:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:15.462 13:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:09:15.462 13:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:15.462 13:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.462 13:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:15.462 [2024-11-20 13:22:57.002432] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:15.462 [2024-11-20 13:22:57.002561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:15.462 [2024-11-20 13:22:57.002608] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:15.462 [2024-11-20 13:22:57.002645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:15.462 [2024-11-20 13:22:57.002662] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3
00:09:15.462 [2024-11-20 13:22:57.002685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.462 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:15.462 "name": "Existed_Raid",
00:09:15.462 "uuid": "fcf70e30-c290-4871-a012-df3e4dba82c8",
00:09:15.462 "strip_size_kb": 0,
00:09:15.462 "state": "configuring",
00:09:15.462 "raid_level": "raid1",
00:09:15.462 "superblock": true,
00:09:15.462 "num_base_bdevs": 3,
00:09:15.462 "num_base_bdevs_discovered": 0,
00:09:15.462 "num_base_bdevs_operational": 3,
00:09:15.463 "base_bdevs_list": [
00:09:15.463 {
00:09:15.463 "name": "BaseBdev1",
00:09:15.463 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:15.463 "is_configured": false,
00:09:15.463 "data_offset": 0,
00:09:15.463 "data_size": 0
00:09:15.463 },
00:09:15.463 {
00:09:15.463 "name": "BaseBdev2",
00:09:15.463 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:15.463 "is_configured": false,
00:09:15.463 "data_offset": 0,
00:09:15.463 "data_size": 0
00:09:15.463 },
00:09:15.463 {
00:09:15.463 "name": "BaseBdev3",
00:09:15.463 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:15.463 "is_configured": false,
00:09:15.463 "data_offset": 0,
00:09:15.463 "data_size": 0
00:09:15.463 }
00:09:15.463 ]
00:09:15.463 }'
00:09:15.463 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:15.463 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:16.032 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:16.032 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.032 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:16.032 [2024-11-20 13:22:57.461557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:16.032 [2024-11-20 13:22:57.461639] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring
00:09:16.032 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.032 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:16.032 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.032 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:16.032 [2024-11-20 13:22:57.469571] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:16.032 [2024-11-20 13:22:57.469649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:16.032 [2024-11-20 13:22:57.469661] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:16.032 [2024-11-20 13:22:57.469686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:16.032 [2024-11-20 13:22:57.469692] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:16.032 [2024-11-20 13:22:57.469700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:16.032 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.032 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:16.032 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.032 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:16.032 BaseBdev1
[2024-11-20 13:22:57.486549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
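The `rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1` call traced above creates a 32 MiB malloc bdev with a 512-byte block size. As an illustrative sanity check only (not part of the test script), that is exactly where the `"num_blocks": 65536` in the `bdev_get_bdevs` dump that follows comes from:

```python
# Arguments to: rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
size_mb, block_size = 32, 512

# 32 MiB divided into 512-byte blocks
num_blocks = size_mb * 1024 * 1024 // block_size
print(num_blocks)  # → 65536, matching "num_blocks": 65536 in the bdev dump
```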
00:09:16.032 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.032 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:16.033 [
00:09:16.033 {
00:09:16.033 "name": "BaseBdev1",
00:09:16.033 "aliases": [
00:09:16.033 "1efad6b1-d59b-4e92-adc3-3d909e8c3f19"
00:09:16.033 ],
00:09:16.033 "product_name": "Malloc disk",
00:09:16.033 "block_size": 512,
00:09:16.033 "num_blocks": 65536,
00:09:16.033 "uuid": "1efad6b1-d59b-4e92-adc3-3d909e8c3f19",
00:09:16.033 "assigned_rate_limits": { 
"rw_ios_per_sec": 0, 00:09:16.033 "rw_mbytes_per_sec": 0, 00:09:16.033 "r_mbytes_per_sec": 0, 00:09:16.033 "w_mbytes_per_sec": 0 00:09:16.033 }, 00:09:16.033 "claimed": true, 00:09:16.033 "claim_type": "exclusive_write", 00:09:16.033 "zoned": false, 00:09:16.033 "supported_io_types": { 00:09:16.033 "read": true, 00:09:16.033 "write": true, 00:09:16.033 "unmap": true, 00:09:16.033 "flush": true, 00:09:16.033 "reset": true, 00:09:16.033 "nvme_admin": false, 00:09:16.033 "nvme_io": false, 00:09:16.033 "nvme_io_md": false, 00:09:16.033 "write_zeroes": true, 00:09:16.033 "zcopy": true, 00:09:16.033 "get_zone_info": false, 00:09:16.033 "zone_management": false, 00:09:16.033 "zone_append": false, 00:09:16.033 "compare": false, 00:09:16.033 "compare_and_write": false, 00:09:16.033 "abort": true, 00:09:16.033 "seek_hole": false, 00:09:16.033 "seek_data": false, 00:09:16.033 "copy": true, 00:09:16.033 "nvme_iov_md": false 00:09:16.033 }, 00:09:16.033 "memory_domains": [ 00:09:16.033 { 00:09:16.033 "dma_device_id": "system", 00:09:16.033 "dma_device_type": 1 00:09:16.033 }, 00:09:16.033 { 00:09:16.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.033 "dma_device_type": 2 00:09:16.033 } 00:09:16.033 ], 00:09:16.033 "driver_specific": {} 00:09:16.033 } 00:09:16.033 ] 00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:16.033 "name": "Existed_Raid",
00:09:16.033 "uuid": "71c39de5-b426-4eaf-901b-6f6fef13cc8c",
00:09:16.033 "strip_size_kb": 0,
00:09:16.033 "state": "configuring",
00:09:16.033 "raid_level": "raid1",
00:09:16.033 "superblock": true,
00:09:16.033 "num_base_bdevs": 3,
00:09:16.033 "num_base_bdevs_discovered": 1,
00:09:16.033 "num_base_bdevs_operational": 3,
00:09:16.033 "base_bdevs_list": [
00:09:16.033 {
00:09:16.033 "name": "BaseBdev1",
00:09:16.033 "uuid": "1efad6b1-d59b-4e92-adc3-3d909e8c3f19",
00:09:16.033 "is_configured": true,
00:09:16.033 "data_offset": 2048,
00:09:16.033 "data_size": 63488 
00:09:16.033 },
00:09:16.033 {
00:09:16.033 "name": "BaseBdev2",
00:09:16.033 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:16.033 "is_configured": false,
00:09:16.033 "data_offset": 0,
00:09:16.033 "data_size": 0
00:09:16.033 },
00:09:16.033 {
00:09:16.033 "name": "BaseBdev3",
00:09:16.033 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:16.033 "is_configured": false,
00:09:16.033 "data_offset": 0,
00:09:16.033 "data_size": 0
00:09:16.033 }
00:09:16.033 ]
00:09:16.033 }'
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:16.033 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:16.602 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:16.602 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.602 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:16.602 [2024-11-20 13:22:57.989718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:16.602 [2024-11-20 13:22:57.989816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:09:16.602 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.602 13:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:16.602 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.602 13:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:16.602 [2024-11-20 13:22:58.001723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:16.602 [2024-11-20 13:22:58.003669] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:16.602 [2024-11-20 13:22:58.003748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:16.602 [2024-11-20 13:22:58.003792] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:16.602 [2024-11-20 13:22:58.003817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:16.602 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.602 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:16.602 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:16.602 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:16.602 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:16.602 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:16.603 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:16.603 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:16.603 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:16.603 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:16.603 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:16.603 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:16.603 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:16.603 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:16.603 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:16.603 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.603 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:16.603 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.603 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:16.603 "name": "Existed_Raid",
00:09:16.603 "uuid": "2a2dbe83-db47-4c52-9066-894e4b8b7465",
00:09:16.603 "strip_size_kb": 0,
00:09:16.603 "state": "configuring",
00:09:16.603 "raid_level": "raid1",
00:09:16.603 "superblock": true,
00:09:16.603 "num_base_bdevs": 3,
00:09:16.603 "num_base_bdevs_discovered": 1,
00:09:16.603 "num_base_bdevs_operational": 3,
00:09:16.603 "base_bdevs_list": [
00:09:16.603 {
00:09:16.603 "name": "BaseBdev1",
00:09:16.603 "uuid": "1efad6b1-d59b-4e92-adc3-3d909e8c3f19",
00:09:16.603 "is_configured": true,
00:09:16.603 "data_offset": 2048,
00:09:16.603 "data_size": 63488
00:09:16.603 },
00:09:16.603 {
00:09:16.603 "name": "BaseBdev2",
00:09:16.603 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:16.603 "is_configured": false,
00:09:16.603 "data_offset": 0,
00:09:16.603 "data_size": 0
00:09:16.603 },
00:09:16.603 {
00:09:16.603 "name": "BaseBdev3",
00:09:16.603 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:16.603 "is_configured": false,
00:09:16.603 "data_offset": 0,
00:09:16.603 "data_size": 0
00:09:16.603 }
00:09:16.603 ]
00:09:16.603 }'
00:09:16.603 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:16.603 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:16.863 [2024-11-20 13:22:58.424050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
BaseBdev2
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.863 [ 00:09:16.863 { 00:09:16.863 "name": "BaseBdev2", 00:09:16.863 "aliases": [ 00:09:16.863 "3370216c-f0cb-4b73-b4eb-352b2098b992" 00:09:16.863 ], 00:09:16.863 "product_name": "Malloc disk", 00:09:16.863 "block_size": 512, 00:09:16.863 "num_blocks": 65536, 00:09:16.863 "uuid": "3370216c-f0cb-4b73-b4eb-352b2098b992", 00:09:16.863 "assigned_rate_limits": { 00:09:16.863 "rw_ios_per_sec": 0, 00:09:16.863 "rw_mbytes_per_sec": 0, 00:09:16.863 "r_mbytes_per_sec": 0, 00:09:16.863 "w_mbytes_per_sec": 0 00:09:16.863 }, 00:09:16.863 "claimed": true, 00:09:16.863 "claim_type": "exclusive_write", 00:09:16.863 "zoned": false, 00:09:16.863 "supported_io_types": { 00:09:16.863 "read": true, 00:09:16.863 "write": true, 00:09:16.863 "unmap": true, 00:09:16.863 "flush": true, 00:09:16.863 "reset": true, 00:09:16.863 "nvme_admin": false, 00:09:16.863 "nvme_io": false, 00:09:16.863 "nvme_io_md": false, 00:09:16.863 "write_zeroes": true, 00:09:16.863 "zcopy": true, 00:09:16.863 "get_zone_info": false, 00:09:16.863 "zone_management": false, 00:09:16.863 "zone_append": false, 00:09:16.863 "compare": false, 00:09:16.863 "compare_and_write": false, 00:09:16.863 "abort": true, 00:09:16.863 "seek_hole": false, 00:09:16.863 "seek_data": false, 00:09:16.863 "copy": true, 00:09:16.863 "nvme_iov_md": false 00:09:16.863 }, 00:09:16.863 "memory_domains": [ 00:09:16.863 { 00:09:16.863 "dma_device_id": "system", 00:09:16.863 "dma_device_type": 1 00:09:16.863 }, 00:09:16.863 { 00:09:16.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.863 "dma_device_type": 2 00:09:16.863 } 00:09:16.863 ], 00:09:16.863 "driver_specific": {} 00:09:16.863 } 00:09:16.863 ] 00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.863 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.864 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.864 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.864 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.864 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.864 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.864 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.864 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.864 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.864 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.864 
13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.864 "name": "Existed_Raid", 00:09:16.864 "uuid": "2a2dbe83-db47-4c52-9066-894e4b8b7465", 00:09:16.864 "strip_size_kb": 0, 00:09:16.864 "state": "configuring", 00:09:16.864 "raid_level": "raid1", 00:09:16.864 "superblock": true, 00:09:16.864 "num_base_bdevs": 3, 00:09:16.864 "num_base_bdevs_discovered": 2, 00:09:16.864 "num_base_bdevs_operational": 3, 00:09:16.864 "base_bdevs_list": [ 00:09:16.864 { 00:09:16.864 "name": "BaseBdev1", 00:09:16.864 "uuid": "1efad6b1-d59b-4e92-adc3-3d909e8c3f19", 00:09:16.864 "is_configured": true, 00:09:16.864 "data_offset": 2048, 00:09:16.864 "data_size": 63488 00:09:16.864 }, 00:09:16.864 { 00:09:16.864 "name": "BaseBdev2", 00:09:16.864 "uuid": "3370216c-f0cb-4b73-b4eb-352b2098b992", 00:09:16.864 "is_configured": true, 00:09:16.864 "data_offset": 2048, 00:09:16.864 "data_size": 63488 00:09:16.864 }, 00:09:16.864 { 00:09:16.864 "name": "BaseBdev3", 00:09:16.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.864 "is_configured": false, 00:09:16.864 "data_offset": 0, 00:09:16.864 "data_size": 0 00:09:16.864 } 00:09:16.864 ] 00:09:16.864 }' 00:09:16.864 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.864 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.435 [2024-11-20 13:22:58.914234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.435 [2024-11-20 13:22:58.915023] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000001900 00:09:17.435 BaseBdev3 00:09:17.435 [2024-11-20 13:22:58.915212] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:17.435 [2024-11-20 13:22:58.916288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:17.435 [2024-11-20 13:22:58.916781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.435 [2024-11-20 13:22:58.916824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:17.435 [2024-11-20 13:22:58.917265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.435 13:22:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.435 [ 00:09:17.435 { 00:09:17.435 "name": "BaseBdev3", 00:09:17.435 "aliases": [ 00:09:17.435 "7bac10f3-00cd-4572-9ce2-30a2af04d668" 00:09:17.435 ], 00:09:17.435 "product_name": "Malloc disk", 00:09:17.435 "block_size": 512, 00:09:17.435 "num_blocks": 65536, 00:09:17.435 "uuid": "7bac10f3-00cd-4572-9ce2-30a2af04d668", 00:09:17.435 "assigned_rate_limits": { 00:09:17.435 "rw_ios_per_sec": 0, 00:09:17.435 "rw_mbytes_per_sec": 0, 00:09:17.435 "r_mbytes_per_sec": 0, 00:09:17.435 "w_mbytes_per_sec": 0 00:09:17.435 }, 00:09:17.435 "claimed": true, 00:09:17.435 "claim_type": "exclusive_write", 00:09:17.435 "zoned": false, 00:09:17.435 "supported_io_types": { 00:09:17.435 "read": true, 00:09:17.435 "write": true, 00:09:17.435 "unmap": true, 00:09:17.435 "flush": true, 00:09:17.435 "reset": true, 00:09:17.435 "nvme_admin": false, 00:09:17.435 "nvme_io": false, 00:09:17.435 "nvme_io_md": false, 00:09:17.435 "write_zeroes": true, 00:09:17.435 "zcopy": true, 00:09:17.435 "get_zone_info": false, 00:09:17.435 "zone_management": false, 00:09:17.435 "zone_append": false, 00:09:17.435 "compare": false, 00:09:17.435 "compare_and_write": false, 00:09:17.435 "abort": true, 00:09:17.435 "seek_hole": false, 00:09:17.435 "seek_data": false, 00:09:17.435 "copy": true, 00:09:17.435 "nvme_iov_md": false 00:09:17.435 }, 00:09:17.435 "memory_domains": [ 00:09:17.435 { 00:09:17.435 "dma_device_id": "system", 00:09:17.435 "dma_device_type": 1 00:09:17.435 }, 00:09:17.435 { 00:09:17.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.435 "dma_device_type": 2 00:09:17.435 } 00:09:17.435 ], 00:09:17.435 "driver_specific": {} 00:09:17.435 } 00:09:17.435 ] 
00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.435 
13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.435 13:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.435 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.435 "name": "Existed_Raid", 00:09:17.435 "uuid": "2a2dbe83-db47-4c52-9066-894e4b8b7465", 00:09:17.435 "strip_size_kb": 0, 00:09:17.435 "state": "online", 00:09:17.435 "raid_level": "raid1", 00:09:17.435 "superblock": true, 00:09:17.435 "num_base_bdevs": 3, 00:09:17.435 "num_base_bdevs_discovered": 3, 00:09:17.435 "num_base_bdevs_operational": 3, 00:09:17.435 "base_bdevs_list": [ 00:09:17.435 { 00:09:17.435 "name": "BaseBdev1", 00:09:17.435 "uuid": "1efad6b1-d59b-4e92-adc3-3d909e8c3f19", 00:09:17.435 "is_configured": true, 00:09:17.435 "data_offset": 2048, 00:09:17.435 "data_size": 63488 00:09:17.435 }, 00:09:17.435 { 00:09:17.435 "name": "BaseBdev2", 00:09:17.435 "uuid": "3370216c-f0cb-4b73-b4eb-352b2098b992", 00:09:17.435 "is_configured": true, 00:09:17.435 "data_offset": 2048, 00:09:17.435 "data_size": 63488 00:09:17.435 }, 00:09:17.435 { 00:09:17.435 "name": "BaseBdev3", 00:09:17.435 "uuid": "7bac10f3-00cd-4572-9ce2-30a2af04d668", 00:09:17.435 "is_configured": true, 00:09:17.435 "data_offset": 2048, 00:09:17.435 "data_size": 63488 00:09:17.435 } 00:09:17.435 ] 00:09:17.435 }' 00:09:17.435 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.435 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.017 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:18.017 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:18.017 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:18.017 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.017 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.017 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.018 [2024-11-20 13:22:59.429713] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.018 "name": "Existed_Raid", 00:09:18.018 "aliases": [ 00:09:18.018 "2a2dbe83-db47-4c52-9066-894e4b8b7465" 00:09:18.018 ], 00:09:18.018 "product_name": "Raid Volume", 00:09:18.018 "block_size": 512, 00:09:18.018 "num_blocks": 63488, 00:09:18.018 "uuid": "2a2dbe83-db47-4c52-9066-894e4b8b7465", 00:09:18.018 "assigned_rate_limits": { 00:09:18.018 "rw_ios_per_sec": 0, 00:09:18.018 "rw_mbytes_per_sec": 0, 00:09:18.018 "r_mbytes_per_sec": 0, 00:09:18.018 "w_mbytes_per_sec": 0 00:09:18.018 }, 00:09:18.018 "claimed": false, 00:09:18.018 "zoned": false, 00:09:18.018 "supported_io_types": { 00:09:18.018 "read": true, 00:09:18.018 "write": true, 00:09:18.018 "unmap": false, 00:09:18.018 "flush": false, 00:09:18.018 "reset": true, 00:09:18.018 "nvme_admin": false, 00:09:18.018 "nvme_io": false, 00:09:18.018 "nvme_io_md": false, 00:09:18.018 "write_zeroes": true, 
00:09:18.018 "zcopy": false, 00:09:18.018 "get_zone_info": false, 00:09:18.018 "zone_management": false, 00:09:18.018 "zone_append": false, 00:09:18.018 "compare": false, 00:09:18.018 "compare_and_write": false, 00:09:18.018 "abort": false, 00:09:18.018 "seek_hole": false, 00:09:18.018 "seek_data": false, 00:09:18.018 "copy": false, 00:09:18.018 "nvme_iov_md": false 00:09:18.018 }, 00:09:18.018 "memory_domains": [ 00:09:18.018 { 00:09:18.018 "dma_device_id": "system", 00:09:18.018 "dma_device_type": 1 00:09:18.018 }, 00:09:18.018 { 00:09:18.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.018 "dma_device_type": 2 00:09:18.018 }, 00:09:18.018 { 00:09:18.018 "dma_device_id": "system", 00:09:18.018 "dma_device_type": 1 00:09:18.018 }, 00:09:18.018 { 00:09:18.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.018 "dma_device_type": 2 00:09:18.018 }, 00:09:18.018 { 00:09:18.018 "dma_device_id": "system", 00:09:18.018 "dma_device_type": 1 00:09:18.018 }, 00:09:18.018 { 00:09:18.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.018 "dma_device_type": 2 00:09:18.018 } 00:09:18.018 ], 00:09:18.018 "driver_specific": { 00:09:18.018 "raid": { 00:09:18.018 "uuid": "2a2dbe83-db47-4c52-9066-894e4b8b7465", 00:09:18.018 "strip_size_kb": 0, 00:09:18.018 "state": "online", 00:09:18.018 "raid_level": "raid1", 00:09:18.018 "superblock": true, 00:09:18.018 "num_base_bdevs": 3, 00:09:18.018 "num_base_bdevs_discovered": 3, 00:09:18.018 "num_base_bdevs_operational": 3, 00:09:18.018 "base_bdevs_list": [ 00:09:18.018 { 00:09:18.018 "name": "BaseBdev1", 00:09:18.018 "uuid": "1efad6b1-d59b-4e92-adc3-3d909e8c3f19", 00:09:18.018 "is_configured": true, 00:09:18.018 "data_offset": 2048, 00:09:18.018 "data_size": 63488 00:09:18.018 }, 00:09:18.018 { 00:09:18.018 "name": "BaseBdev2", 00:09:18.018 "uuid": "3370216c-f0cb-4b73-b4eb-352b2098b992", 00:09:18.018 "is_configured": true, 00:09:18.018 "data_offset": 2048, 00:09:18.018 "data_size": 63488 00:09:18.018 }, 00:09:18.018 { 
00:09:18.018 "name": "BaseBdev3", 00:09:18.018 "uuid": "7bac10f3-00cd-4572-9ce2-30a2af04d668", 00:09:18.018 "is_configured": true, 00:09:18.018 "data_offset": 2048, 00:09:18.018 "data_size": 63488 00:09:18.018 } 00:09:18.018 ] 00:09:18.018 } 00:09:18.018 } 00:09:18.018 }' 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:18.018 BaseBdev2 00:09:18.018 BaseBdev3' 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.018 13:22:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.018 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.278 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.278 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:18.278 13:22:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.279 [2024-11-20 13:22:59.692984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.279 
13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.279 "name": "Existed_Raid", 00:09:18.279 "uuid": "2a2dbe83-db47-4c52-9066-894e4b8b7465", 00:09:18.279 "strip_size_kb": 0, 00:09:18.279 "state": "online", 00:09:18.279 "raid_level": "raid1", 00:09:18.279 "superblock": true, 00:09:18.279 "num_base_bdevs": 3, 00:09:18.279 "num_base_bdevs_discovered": 2, 00:09:18.279 "num_base_bdevs_operational": 2, 00:09:18.279 "base_bdevs_list": [ 00:09:18.279 { 00:09:18.279 "name": null, 00:09:18.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.279 "is_configured": false, 00:09:18.279 "data_offset": 0, 00:09:18.279 "data_size": 63488 00:09:18.279 }, 00:09:18.279 { 00:09:18.279 "name": "BaseBdev2", 00:09:18.279 "uuid": "3370216c-f0cb-4b73-b4eb-352b2098b992", 00:09:18.279 "is_configured": true, 00:09:18.279 "data_offset": 2048, 00:09:18.279 "data_size": 63488 00:09:18.279 }, 00:09:18.279 { 00:09:18.279 "name": "BaseBdev3", 00:09:18.279 "uuid": "7bac10f3-00cd-4572-9ce2-30a2af04d668", 00:09:18.279 "is_configured": true, 00:09:18.279 "data_offset": 2048, 00:09:18.279 "data_size": 63488 00:09:18.279 } 00:09:18.279 ] 00:09:18.279 }' 00:09:18.279 13:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.279 
13:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.539 [2024-11-20 13:23:00.135718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.539 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.539 [2024-11-20 13:23:00.202606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:18.539 [2024-11-20 13:23:00.202779] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:18.799 [2024-11-20 13:23:00.214688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.799 [2024-11-20 13:23:00.214843] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:18.799 [2024-11-20 13:23:00.214903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.799 BaseBdev2 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.799 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.800 [ 00:09:18.800 { 00:09:18.800 "name": "BaseBdev2", 00:09:18.800 "aliases": [ 00:09:18.800 "22810dd5-3784-4026-a46b-a67ba4fce95d" 00:09:18.800 ], 00:09:18.800 "product_name": "Malloc disk", 00:09:18.800 "block_size": 512, 00:09:18.800 "num_blocks": 65536, 00:09:18.800 "uuid": "22810dd5-3784-4026-a46b-a67ba4fce95d", 00:09:18.800 "assigned_rate_limits": { 00:09:18.800 "rw_ios_per_sec": 0, 00:09:18.800 "rw_mbytes_per_sec": 0, 00:09:18.800 "r_mbytes_per_sec": 0, 00:09:18.800 "w_mbytes_per_sec": 0 00:09:18.800 }, 00:09:18.800 "claimed": false, 00:09:18.800 "zoned": false, 00:09:18.800 "supported_io_types": { 00:09:18.800 "read": true, 00:09:18.800 "write": true, 00:09:18.800 "unmap": true, 00:09:18.800 "flush": true, 00:09:18.800 "reset": true, 00:09:18.800 "nvme_admin": false, 00:09:18.800 "nvme_io": false, 00:09:18.800 
"nvme_io_md": false, 00:09:18.800 "write_zeroes": true, 00:09:18.800 "zcopy": true, 00:09:18.800 "get_zone_info": false, 00:09:18.800 "zone_management": false, 00:09:18.800 "zone_append": false, 00:09:18.800 "compare": false, 00:09:18.800 "compare_and_write": false, 00:09:18.800 "abort": true, 00:09:18.800 "seek_hole": false, 00:09:18.800 "seek_data": false, 00:09:18.800 "copy": true, 00:09:18.800 "nvme_iov_md": false 00:09:18.800 }, 00:09:18.800 "memory_domains": [ 00:09:18.800 { 00:09:18.800 "dma_device_id": "system", 00:09:18.800 "dma_device_type": 1 00:09:18.800 }, 00:09:18.800 { 00:09:18.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.800 "dma_device_type": 2 00:09:18.800 } 00:09:18.800 ], 00:09:18.800 "driver_specific": {} 00:09:18.800 } 00:09:18.800 ] 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.800 BaseBdev3 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.800 [ 00:09:18.800 { 00:09:18.800 "name": "BaseBdev3", 00:09:18.800 "aliases": [ 00:09:18.800 "52016f76-0dee-415f-ba65-fe4d5c53bfa7" 00:09:18.800 ], 00:09:18.800 "product_name": "Malloc disk", 00:09:18.800 "block_size": 512, 00:09:18.800 "num_blocks": 65536, 00:09:18.800 "uuid": "52016f76-0dee-415f-ba65-fe4d5c53bfa7", 00:09:18.800 "assigned_rate_limits": { 00:09:18.800 "rw_ios_per_sec": 0, 00:09:18.800 "rw_mbytes_per_sec": 0, 00:09:18.800 "r_mbytes_per_sec": 0, 00:09:18.800 "w_mbytes_per_sec": 0 00:09:18.800 }, 00:09:18.800 "claimed": false, 00:09:18.800 "zoned": false, 00:09:18.800 "supported_io_types": { 00:09:18.800 "read": true, 00:09:18.800 "write": true, 00:09:18.800 "unmap": true, 00:09:18.800 "flush": true, 00:09:18.800 "reset": true, 00:09:18.800 "nvme_admin": false, 
00:09:18.800 "nvme_io": false, 00:09:18.800 "nvme_io_md": false, 00:09:18.800 "write_zeroes": true, 00:09:18.800 "zcopy": true, 00:09:18.800 "get_zone_info": false, 00:09:18.800 "zone_management": false, 00:09:18.800 "zone_append": false, 00:09:18.800 "compare": false, 00:09:18.800 "compare_and_write": false, 00:09:18.800 "abort": true, 00:09:18.800 "seek_hole": false, 00:09:18.800 "seek_data": false, 00:09:18.800 "copy": true, 00:09:18.800 "nvme_iov_md": false 00:09:18.800 }, 00:09:18.800 "memory_domains": [ 00:09:18.800 { 00:09:18.800 "dma_device_id": "system", 00:09:18.800 "dma_device_type": 1 00:09:18.800 }, 00:09:18.800 { 00:09:18.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.800 "dma_device_type": 2 00:09:18.800 } 00:09:18.800 ], 00:09:18.800 "driver_specific": {} 00:09:18.800 } 00:09:18.800 ] 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.800 [2024-11-20 13:23:00.363384] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.800 [2024-11-20 13:23:00.363472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.800 [2024-11-20 13:23:00.363509] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.800 [2024-11-20 13:23:00.365387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.800 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.801 
13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.801 "name": "Existed_Raid", 00:09:18.801 "uuid": "a17e4b55-f786-479d-87a9-54c03844cd96", 00:09:18.801 "strip_size_kb": 0, 00:09:18.801 "state": "configuring", 00:09:18.801 "raid_level": "raid1", 00:09:18.801 "superblock": true, 00:09:18.801 "num_base_bdevs": 3, 00:09:18.801 "num_base_bdevs_discovered": 2, 00:09:18.801 "num_base_bdevs_operational": 3, 00:09:18.801 "base_bdevs_list": [ 00:09:18.801 { 00:09:18.801 "name": "BaseBdev1", 00:09:18.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.801 "is_configured": false, 00:09:18.801 "data_offset": 0, 00:09:18.801 "data_size": 0 00:09:18.801 }, 00:09:18.801 { 00:09:18.801 "name": "BaseBdev2", 00:09:18.801 "uuid": "22810dd5-3784-4026-a46b-a67ba4fce95d", 00:09:18.801 "is_configured": true, 00:09:18.801 "data_offset": 2048, 00:09:18.801 "data_size": 63488 00:09:18.801 }, 00:09:18.801 { 00:09:18.801 "name": "BaseBdev3", 00:09:18.801 "uuid": "52016f76-0dee-415f-ba65-fe4d5c53bfa7", 00:09:18.801 "is_configured": true, 00:09:18.801 "data_offset": 2048, 00:09:18.801 "data_size": 63488 00:09:18.801 } 00:09:18.801 ] 00:09:18.801 }' 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.801 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.371 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:19.371 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.371 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.371 [2024-11-20 13:23:00.810627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:19.371 13:23:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.371 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.372 "name": 
"Existed_Raid", 00:09:19.372 "uuid": "a17e4b55-f786-479d-87a9-54c03844cd96", 00:09:19.372 "strip_size_kb": 0, 00:09:19.372 "state": "configuring", 00:09:19.372 "raid_level": "raid1", 00:09:19.372 "superblock": true, 00:09:19.372 "num_base_bdevs": 3, 00:09:19.372 "num_base_bdevs_discovered": 1, 00:09:19.372 "num_base_bdevs_operational": 3, 00:09:19.372 "base_bdevs_list": [ 00:09:19.372 { 00:09:19.372 "name": "BaseBdev1", 00:09:19.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.372 "is_configured": false, 00:09:19.372 "data_offset": 0, 00:09:19.372 "data_size": 0 00:09:19.372 }, 00:09:19.372 { 00:09:19.372 "name": null, 00:09:19.372 "uuid": "22810dd5-3784-4026-a46b-a67ba4fce95d", 00:09:19.372 "is_configured": false, 00:09:19.372 "data_offset": 0, 00:09:19.372 "data_size": 63488 00:09:19.372 }, 00:09:19.372 { 00:09:19.372 "name": "BaseBdev3", 00:09:19.372 "uuid": "52016f76-0dee-415f-ba65-fe4d5c53bfa7", 00:09:19.372 "is_configured": true, 00:09:19.372 "data_offset": 2048, 00:09:19.372 "data_size": 63488 00:09:19.372 } 00:09:19.372 ] 00:09:19.372 }' 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.372 13:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.631 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.631 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.631 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.631 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:19.631 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.890 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:19.890 
13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:19.890 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.890 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.890 [2024-11-20 13:23:01.316809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.890 BaseBdev1 00:09:19.890 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.890 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.891 [ 00:09:19.891 { 00:09:19.891 "name": "BaseBdev1", 00:09:19.891 "aliases": [ 00:09:19.891 "15cfad82-df6a-4921-97ac-42a816476191" 00:09:19.891 ], 00:09:19.891 "product_name": "Malloc disk", 00:09:19.891 "block_size": 512, 00:09:19.891 "num_blocks": 65536, 00:09:19.891 "uuid": "15cfad82-df6a-4921-97ac-42a816476191", 00:09:19.891 "assigned_rate_limits": { 00:09:19.891 "rw_ios_per_sec": 0, 00:09:19.891 "rw_mbytes_per_sec": 0, 00:09:19.891 "r_mbytes_per_sec": 0, 00:09:19.891 "w_mbytes_per_sec": 0 00:09:19.891 }, 00:09:19.891 "claimed": true, 00:09:19.891 "claim_type": "exclusive_write", 00:09:19.891 "zoned": false, 00:09:19.891 "supported_io_types": { 00:09:19.891 "read": true, 00:09:19.891 "write": true, 00:09:19.891 "unmap": true, 00:09:19.891 "flush": true, 00:09:19.891 "reset": true, 00:09:19.891 "nvme_admin": false, 00:09:19.891 "nvme_io": false, 00:09:19.891 "nvme_io_md": false, 00:09:19.891 "write_zeroes": true, 00:09:19.891 "zcopy": true, 00:09:19.891 "get_zone_info": false, 00:09:19.891 "zone_management": false, 00:09:19.891 "zone_append": false, 00:09:19.891 "compare": false, 00:09:19.891 "compare_and_write": false, 00:09:19.891 "abort": true, 00:09:19.891 "seek_hole": false, 00:09:19.891 "seek_data": false, 00:09:19.891 "copy": true, 00:09:19.891 "nvme_iov_md": false 00:09:19.891 }, 00:09:19.891 "memory_domains": [ 00:09:19.891 { 00:09:19.891 "dma_device_id": "system", 00:09:19.891 "dma_device_type": 1 00:09:19.891 }, 00:09:19.891 { 00:09:19.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.891 "dma_device_type": 2 00:09:19.891 } 00:09:19.891 ], 00:09:19.891 "driver_specific": {} 00:09:19.891 } 00:09:19.891 ] 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:19.891 
13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.891 "name": "Existed_Raid", 00:09:19.891 "uuid": "a17e4b55-f786-479d-87a9-54c03844cd96", 00:09:19.891 "strip_size_kb": 0, 
00:09:19.891 "state": "configuring", 00:09:19.891 "raid_level": "raid1", 00:09:19.891 "superblock": true, 00:09:19.891 "num_base_bdevs": 3, 00:09:19.891 "num_base_bdevs_discovered": 2, 00:09:19.891 "num_base_bdevs_operational": 3, 00:09:19.891 "base_bdevs_list": [ 00:09:19.891 { 00:09:19.891 "name": "BaseBdev1", 00:09:19.891 "uuid": "15cfad82-df6a-4921-97ac-42a816476191", 00:09:19.891 "is_configured": true, 00:09:19.891 "data_offset": 2048, 00:09:19.891 "data_size": 63488 00:09:19.891 }, 00:09:19.891 { 00:09:19.891 "name": null, 00:09:19.891 "uuid": "22810dd5-3784-4026-a46b-a67ba4fce95d", 00:09:19.891 "is_configured": false, 00:09:19.891 "data_offset": 0, 00:09:19.891 "data_size": 63488 00:09:19.891 }, 00:09:19.891 { 00:09:19.891 "name": "BaseBdev3", 00:09:19.891 "uuid": "52016f76-0dee-415f-ba65-fe4d5c53bfa7", 00:09:19.891 "is_configured": true, 00:09:19.891 "data_offset": 2048, 00:09:19.891 "data_size": 63488 00:09:19.891 } 00:09:19.891 ] 00:09:19.891 }' 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.891 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.151 [2024-11-20 13:23:01.804057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.151 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.151 13:23:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.411 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.411 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.411 "name": "Existed_Raid", 00:09:20.411 "uuid": "a17e4b55-f786-479d-87a9-54c03844cd96", 00:09:20.411 "strip_size_kb": 0, 00:09:20.411 "state": "configuring", 00:09:20.411 "raid_level": "raid1", 00:09:20.411 "superblock": true, 00:09:20.411 "num_base_bdevs": 3, 00:09:20.411 "num_base_bdevs_discovered": 1, 00:09:20.411 "num_base_bdevs_operational": 3, 00:09:20.411 "base_bdevs_list": [ 00:09:20.411 { 00:09:20.411 "name": "BaseBdev1", 00:09:20.411 "uuid": "15cfad82-df6a-4921-97ac-42a816476191", 00:09:20.411 "is_configured": true, 00:09:20.411 "data_offset": 2048, 00:09:20.411 "data_size": 63488 00:09:20.411 }, 00:09:20.411 { 00:09:20.411 "name": null, 00:09:20.411 "uuid": "22810dd5-3784-4026-a46b-a67ba4fce95d", 00:09:20.411 "is_configured": false, 00:09:20.411 "data_offset": 0, 00:09:20.411 "data_size": 63488 00:09:20.411 }, 00:09:20.411 { 00:09:20.411 "name": null, 00:09:20.411 "uuid": "52016f76-0dee-415f-ba65-fe4d5c53bfa7", 00:09:20.411 "is_configured": false, 00:09:20.411 "data_offset": 0, 00:09:20.411 "data_size": 63488 00:09:20.411 } 00:09:20.411 ] 00:09:20.411 }' 00:09:20.411 13:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.411 13:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.671 [2024-11-20 13:23:02.271461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.671 "name": "Existed_Raid", 00:09:20.671 "uuid": "a17e4b55-f786-479d-87a9-54c03844cd96", 00:09:20.671 "strip_size_kb": 0, 00:09:20.671 "state": "configuring", 00:09:20.671 "raid_level": "raid1", 00:09:20.671 "superblock": true, 00:09:20.671 "num_base_bdevs": 3, 00:09:20.671 "num_base_bdevs_discovered": 2, 00:09:20.671 "num_base_bdevs_operational": 3, 00:09:20.671 "base_bdevs_list": [ 00:09:20.671 { 00:09:20.671 "name": "BaseBdev1", 00:09:20.671 "uuid": "15cfad82-df6a-4921-97ac-42a816476191", 00:09:20.671 "is_configured": true, 00:09:20.671 "data_offset": 2048, 00:09:20.671 "data_size": 63488 00:09:20.671 }, 00:09:20.671 { 00:09:20.671 "name": null, 00:09:20.671 "uuid": "22810dd5-3784-4026-a46b-a67ba4fce95d", 00:09:20.671 "is_configured": false, 00:09:20.671 "data_offset": 0, 00:09:20.671 "data_size": 63488 00:09:20.671 }, 00:09:20.671 { 00:09:20.671 "name": "BaseBdev3", 00:09:20.671 "uuid": "52016f76-0dee-415f-ba65-fe4d5c53bfa7", 00:09:20.671 "is_configured": true, 00:09:20.671 "data_offset": 2048, 00:09:20.671 "data_size": 63488 00:09:20.671 } 00:09:20.671 ] 00:09:20.671 }' 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.671 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.241 [2024-11-20 13:23:02.778602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.241 "name": "Existed_Raid", 00:09:21.241 "uuid": "a17e4b55-f786-479d-87a9-54c03844cd96", 00:09:21.241 "strip_size_kb": 0, 00:09:21.241 "state": "configuring", 00:09:21.241 "raid_level": "raid1", 00:09:21.241 "superblock": true, 00:09:21.241 "num_base_bdevs": 3, 00:09:21.241 "num_base_bdevs_discovered": 1, 00:09:21.241 "num_base_bdevs_operational": 3, 00:09:21.241 "base_bdevs_list": [ 00:09:21.241 { 00:09:21.241 "name": null, 00:09:21.241 "uuid": "15cfad82-df6a-4921-97ac-42a816476191", 00:09:21.241 "is_configured": false, 00:09:21.241 "data_offset": 0, 00:09:21.241 "data_size": 63488 00:09:21.241 }, 00:09:21.241 { 00:09:21.241 "name": null, 00:09:21.241 "uuid": 
"22810dd5-3784-4026-a46b-a67ba4fce95d", 00:09:21.241 "is_configured": false, 00:09:21.241 "data_offset": 0, 00:09:21.241 "data_size": 63488 00:09:21.241 }, 00:09:21.241 { 00:09:21.241 "name": "BaseBdev3", 00:09:21.241 "uuid": "52016f76-0dee-415f-ba65-fe4d5c53bfa7", 00:09:21.241 "is_configured": true, 00:09:21.241 "data_offset": 2048, 00:09:21.241 "data_size": 63488 00:09:21.241 } 00:09:21.241 ] 00:09:21.241 }' 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.241 13:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.811 [2024-11-20 13:23:03.304414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.811 "name": "Existed_Raid", 00:09:21.811 "uuid": "a17e4b55-f786-479d-87a9-54c03844cd96", 00:09:21.811 "strip_size_kb": 0, 00:09:21.811 "state": "configuring", 00:09:21.811 
"raid_level": "raid1", 00:09:21.811 "superblock": true, 00:09:21.811 "num_base_bdevs": 3, 00:09:21.811 "num_base_bdevs_discovered": 2, 00:09:21.811 "num_base_bdevs_operational": 3, 00:09:21.811 "base_bdevs_list": [ 00:09:21.811 { 00:09:21.811 "name": null, 00:09:21.811 "uuid": "15cfad82-df6a-4921-97ac-42a816476191", 00:09:21.811 "is_configured": false, 00:09:21.811 "data_offset": 0, 00:09:21.811 "data_size": 63488 00:09:21.811 }, 00:09:21.811 { 00:09:21.811 "name": "BaseBdev2", 00:09:21.811 "uuid": "22810dd5-3784-4026-a46b-a67ba4fce95d", 00:09:21.811 "is_configured": true, 00:09:21.811 "data_offset": 2048, 00:09:21.811 "data_size": 63488 00:09:21.811 }, 00:09:21.811 { 00:09:21.811 "name": "BaseBdev3", 00:09:21.811 "uuid": "52016f76-0dee-415f-ba65-fe4d5c53bfa7", 00:09:21.811 "is_configured": true, 00:09:21.811 "data_offset": 2048, 00:09:21.811 "data_size": 63488 00:09:21.811 } 00:09:21.811 ] 00:09:21.811 }' 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.811 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.071 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.071 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.071 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.071 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:22.071 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.071 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:22.071 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.071 13:23:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:22.071 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.071 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.331 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.331 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 15cfad82-df6a-4921-97ac-42a816476191 00:09:22.331 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.331 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.331 [2024-11-20 13:23:03.797410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:22.331 [2024-11-20 13:23:03.797739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:22.331 [2024-11-20 13:23:03.797783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:22.331 [2024-11-20 13:23:03.798135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:09:22.331 [2024-11-20 13:23:03.798322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:22.331 NewBaseBdev 00:09:22.331 [2024-11-20 13:23:03.798380] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:22.331 [2024-11-20 13:23:03.798545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.331 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.331 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:22.331 
13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:22.331 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.331 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:22.331 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.331 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.332 [ 00:09:22.332 { 00:09:22.332 "name": "NewBaseBdev", 00:09:22.332 "aliases": [ 00:09:22.332 "15cfad82-df6a-4921-97ac-42a816476191" 00:09:22.332 ], 00:09:22.332 "product_name": "Malloc disk", 00:09:22.332 "block_size": 512, 00:09:22.332 "num_blocks": 65536, 00:09:22.332 "uuid": "15cfad82-df6a-4921-97ac-42a816476191", 00:09:22.332 "assigned_rate_limits": { 00:09:22.332 "rw_ios_per_sec": 0, 00:09:22.332 "rw_mbytes_per_sec": 0, 00:09:22.332 "r_mbytes_per_sec": 0, 00:09:22.332 "w_mbytes_per_sec": 0 00:09:22.332 }, 00:09:22.332 "claimed": true, 00:09:22.332 "claim_type": "exclusive_write", 00:09:22.332 
"zoned": false, 00:09:22.332 "supported_io_types": { 00:09:22.332 "read": true, 00:09:22.332 "write": true, 00:09:22.332 "unmap": true, 00:09:22.332 "flush": true, 00:09:22.332 "reset": true, 00:09:22.332 "nvme_admin": false, 00:09:22.332 "nvme_io": false, 00:09:22.332 "nvme_io_md": false, 00:09:22.332 "write_zeroes": true, 00:09:22.332 "zcopy": true, 00:09:22.332 "get_zone_info": false, 00:09:22.332 "zone_management": false, 00:09:22.332 "zone_append": false, 00:09:22.332 "compare": false, 00:09:22.332 "compare_and_write": false, 00:09:22.332 "abort": true, 00:09:22.332 "seek_hole": false, 00:09:22.332 "seek_data": false, 00:09:22.332 "copy": true, 00:09:22.332 "nvme_iov_md": false 00:09:22.332 }, 00:09:22.332 "memory_domains": [ 00:09:22.332 { 00:09:22.332 "dma_device_id": "system", 00:09:22.332 "dma_device_type": 1 00:09:22.332 }, 00:09:22.332 { 00:09:22.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.332 "dma_device_type": 2 00:09:22.332 } 00:09:22.332 ], 00:09:22.332 "driver_specific": {} 00:09:22.332 } 00:09:22.332 ] 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.332 "name": "Existed_Raid", 00:09:22.332 "uuid": "a17e4b55-f786-479d-87a9-54c03844cd96", 00:09:22.332 "strip_size_kb": 0, 00:09:22.332 "state": "online", 00:09:22.332 "raid_level": "raid1", 00:09:22.332 "superblock": true, 00:09:22.332 "num_base_bdevs": 3, 00:09:22.332 "num_base_bdevs_discovered": 3, 00:09:22.332 "num_base_bdevs_operational": 3, 00:09:22.332 "base_bdevs_list": [ 00:09:22.332 { 00:09:22.332 "name": "NewBaseBdev", 00:09:22.332 "uuid": "15cfad82-df6a-4921-97ac-42a816476191", 00:09:22.332 "is_configured": true, 00:09:22.332 "data_offset": 2048, 00:09:22.332 "data_size": 63488 00:09:22.332 }, 00:09:22.332 { 00:09:22.332 "name": "BaseBdev2", 00:09:22.332 "uuid": "22810dd5-3784-4026-a46b-a67ba4fce95d", 00:09:22.332 "is_configured": true, 00:09:22.332 "data_offset": 2048, 00:09:22.332 "data_size": 63488 00:09:22.332 }, 00:09:22.332 
{ 00:09:22.332 "name": "BaseBdev3", 00:09:22.332 "uuid": "52016f76-0dee-415f-ba65-fe4d5c53bfa7", 00:09:22.332 "is_configured": true, 00:09:22.332 "data_offset": 2048, 00:09:22.332 "data_size": 63488 00:09:22.332 } 00:09:22.332 ] 00:09:22.332 }' 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.332 13:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:22.905 [2024-11-20 13:23:04.289084] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:22.905 "name": "Existed_Raid", 00:09:22.905 
"aliases": [ 00:09:22.905 "a17e4b55-f786-479d-87a9-54c03844cd96" 00:09:22.905 ], 00:09:22.905 "product_name": "Raid Volume", 00:09:22.905 "block_size": 512, 00:09:22.905 "num_blocks": 63488, 00:09:22.905 "uuid": "a17e4b55-f786-479d-87a9-54c03844cd96", 00:09:22.905 "assigned_rate_limits": { 00:09:22.905 "rw_ios_per_sec": 0, 00:09:22.905 "rw_mbytes_per_sec": 0, 00:09:22.905 "r_mbytes_per_sec": 0, 00:09:22.905 "w_mbytes_per_sec": 0 00:09:22.905 }, 00:09:22.905 "claimed": false, 00:09:22.905 "zoned": false, 00:09:22.905 "supported_io_types": { 00:09:22.905 "read": true, 00:09:22.905 "write": true, 00:09:22.905 "unmap": false, 00:09:22.905 "flush": false, 00:09:22.905 "reset": true, 00:09:22.905 "nvme_admin": false, 00:09:22.905 "nvme_io": false, 00:09:22.905 "nvme_io_md": false, 00:09:22.905 "write_zeroes": true, 00:09:22.905 "zcopy": false, 00:09:22.905 "get_zone_info": false, 00:09:22.905 "zone_management": false, 00:09:22.905 "zone_append": false, 00:09:22.905 "compare": false, 00:09:22.905 "compare_and_write": false, 00:09:22.905 "abort": false, 00:09:22.905 "seek_hole": false, 00:09:22.905 "seek_data": false, 00:09:22.905 "copy": false, 00:09:22.905 "nvme_iov_md": false 00:09:22.905 }, 00:09:22.905 "memory_domains": [ 00:09:22.905 { 00:09:22.905 "dma_device_id": "system", 00:09:22.905 "dma_device_type": 1 00:09:22.905 }, 00:09:22.905 { 00:09:22.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.905 "dma_device_type": 2 00:09:22.905 }, 00:09:22.905 { 00:09:22.905 "dma_device_id": "system", 00:09:22.905 "dma_device_type": 1 00:09:22.905 }, 00:09:22.905 { 00:09:22.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.905 "dma_device_type": 2 00:09:22.905 }, 00:09:22.905 { 00:09:22.905 "dma_device_id": "system", 00:09:22.905 "dma_device_type": 1 00:09:22.905 }, 00:09:22.905 { 00:09:22.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.905 "dma_device_type": 2 00:09:22.905 } 00:09:22.905 ], 00:09:22.905 "driver_specific": { 00:09:22.905 "raid": { 00:09:22.905 
"uuid": "a17e4b55-f786-479d-87a9-54c03844cd96", 00:09:22.905 "strip_size_kb": 0, 00:09:22.905 "state": "online", 00:09:22.905 "raid_level": "raid1", 00:09:22.905 "superblock": true, 00:09:22.905 "num_base_bdevs": 3, 00:09:22.905 "num_base_bdevs_discovered": 3, 00:09:22.905 "num_base_bdevs_operational": 3, 00:09:22.905 "base_bdevs_list": [ 00:09:22.905 { 00:09:22.905 "name": "NewBaseBdev", 00:09:22.905 "uuid": "15cfad82-df6a-4921-97ac-42a816476191", 00:09:22.905 "is_configured": true, 00:09:22.905 "data_offset": 2048, 00:09:22.905 "data_size": 63488 00:09:22.905 }, 00:09:22.905 { 00:09:22.905 "name": "BaseBdev2", 00:09:22.905 "uuid": "22810dd5-3784-4026-a46b-a67ba4fce95d", 00:09:22.905 "is_configured": true, 00:09:22.905 "data_offset": 2048, 00:09:22.905 "data_size": 63488 00:09:22.905 }, 00:09:22.905 { 00:09:22.905 "name": "BaseBdev3", 00:09:22.905 "uuid": "52016f76-0dee-415f-ba65-fe4d5c53bfa7", 00:09:22.905 "is_configured": true, 00:09:22.905 "data_offset": 2048, 00:09:22.905 "data_size": 63488 00:09:22.905 } 00:09:22.905 ] 00:09:22.905 } 00:09:22.905 } 00:09:22.905 }' 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:22.905 BaseBdev2 00:09:22.905 BaseBdev3' 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:22.905 13:23:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.905 13:23:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.905 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.165 [2024-11-20 13:23:04.572157] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.165 [2024-11-20 13:23:04.572288] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.165 [2024-11-20 13:23:04.572405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.166 [2024-11-20 13:23:04.572740] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.166 [2024-11-20 13:23:04.572799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:23.166 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.166 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78757 00:09:23.166 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 78757 ']' 00:09:23.166 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 78757 00:09:23.166 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:23.166 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.166 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78757 00:09:23.166 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:23.166 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:23.166 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78757' 00:09:23.166 killing process with pid 78757 00:09:23.166 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 78757 00:09:23.166 [2024-11-20 13:23:04.612432] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:23.166 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 78757 00:09:23.166 [2024-11-20 13:23:04.673334] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.426 13:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:23.426 ************************************ 00:09:23.426 END TEST raid_state_function_test_sb 00:09:23.426 ************************************ 00:09:23.426 00:09:23.426 real 0m8.923s 00:09:23.426 user 0m15.202s 00:09:23.426 sys 0m1.736s 00:09:23.426 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.426 13:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.426 13:23:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:23.426 13:23:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:23.426 13:23:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.426 13:23:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.426 ************************************ 00:09:23.426 START TEST raid_superblock_test 00:09:23.426 ************************************ 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79361 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79361 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 79361 ']' 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.426 13:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.686 [2024-11-20 13:23:05.157099] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:09:23.686 [2024-11-20 13:23:05.157779] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79361 ]
00:09:23.686 [2024-11-20 13:23:05.294391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:23.686 [2024-11-20 13:23:05.335850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:23.946 [2024-11-20 13:23:05.416183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:23.946 [2024-11-20 13:23:05.416321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.517 malloc1
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.517 [2024-11-20 13:23:06.056915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:24.517 [2024-11-20 13:23:06.057314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:24.517 [2024-11-20 13:23:06.057433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:09:24.517 [2024-11-20 13:23:06.057548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:24.517 [2024-11-20 13:23:06.060179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:24.517 [2024-11-20 13:23:06.060350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:24.517 pt1
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.517 malloc2
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.517 [2024-11-20 13:23:06.088133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:24.517 [2024-11-20 13:23:06.088416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:24.517 [2024-11-20 13:23:06.088466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:09:24.517 [2024-11-20 13:23:06.088498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:24.517 [2024-11-20 13:23:06.090965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:24.517 [2024-11-20 13:23:06.091206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:24.517 pt2
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.517 malloc3
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.517 [2024-11-20 13:23:06.123061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:24.517 [2024-11-20 13:23:06.123312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:24.517 [2024-11-20 13:23:06.123397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:09:24.517 [2024-11-20 13:23:06.123483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:24.517 [2024-11-20 13:23:06.126017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:24.517 [2024-11-20 13:23:06.126193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:24.517 pt3
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.517 [2024-11-20 13:23:06.135122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:24.517 [2024-11-20 13:23:06.137397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:24.517 [2024-11-20 13:23:06.137497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:24.517 [2024-11-20 13:23:06.137697] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
00:09:24.517 [2024-11-20 13:23:06.137744] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:24.517 [2024-11-20 13:23:06.138057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:09:24.517 [2024-11-20 13:23:06.138263] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
00:09:24.517 [2024-11-20 13:23:06.138310] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200
00:09:24.517 [2024-11-20 13:23:06.138510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:24.517 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:24.518 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:24.518 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:24.518 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:24.518 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:24.518 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:24.518 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:24.518 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:24.518 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.518 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:24.518 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.518 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.778 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:24.778 "name": "raid_bdev1",
00:09:24.778 "uuid": "aa35c115-c5c9-4fac-ad9c-10753470e5a8",
00:09:24.778 "strip_size_kb": 0,
00:09:24.778 "state": "online",
00:09:24.778 "raid_level": "raid1",
00:09:24.778 "superblock": true,
00:09:24.778 "num_base_bdevs": 3,
00:09:24.778 "num_base_bdevs_discovered": 3,
00:09:24.778 "num_base_bdevs_operational": 3,
00:09:24.778 "base_bdevs_list": [
00:09:24.778 {
00:09:24.778 "name": "pt1",
00:09:24.778 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:24.778 "is_configured": true,
00:09:24.778 "data_offset": 2048,
00:09:24.778 "data_size": 63488
00:09:24.778 },
00:09:24.778 {
00:09:24.778 "name": "pt2",
00:09:24.778 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:24.778 "is_configured": true,
00:09:24.778 "data_offset": 2048,
00:09:24.778 "data_size": 63488
00:09:24.778 },
00:09:24.778 {
00:09:24.778 "name": "pt3",
00:09:24.778 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:24.778 "is_configured": true,
00:09:24.778 "data_offset": 2048,
00:09:24.778 "data_size": 63488
00:09:24.778 }
00:09:24.778 ]
00:09:24.778 }'
00:09:24.778 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:24.778 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.038 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:09:25.038 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:25.038 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:25.038 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:25.038 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:25.038 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:25.038 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:25.038 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:25.038 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.038 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.038 [2024-11-20 13:23:06.606720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:25.038 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.038 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:25.038 "name": "raid_bdev1",
00:09:25.038 "aliases": [
00:09:25.038 "aa35c115-c5c9-4fac-ad9c-10753470e5a8"
00:09:25.038 ],
00:09:25.038 "product_name": "Raid Volume",
00:09:25.038 "block_size": 512,
00:09:25.038 "num_blocks": 63488,
00:09:25.038 "uuid": "aa35c115-c5c9-4fac-ad9c-10753470e5a8",
00:09:25.038 "assigned_rate_limits": {
00:09:25.038 "rw_ios_per_sec": 0,
00:09:25.038 "rw_mbytes_per_sec": 0,
00:09:25.038 "r_mbytes_per_sec": 0,
00:09:25.038 "w_mbytes_per_sec": 0
00:09:25.038 },
00:09:25.038 "claimed": false,
00:09:25.038 "zoned": false,
00:09:25.038 "supported_io_types": {
00:09:25.038 "read": true,
00:09:25.038 "write": true,
00:09:25.038 "unmap": false,
00:09:25.038 "flush": false,
00:09:25.038 "reset": true,
00:09:25.038 "nvme_admin": false,
00:09:25.038 "nvme_io": false,
00:09:25.038 "nvme_io_md": false,
00:09:25.038 "write_zeroes": true,
00:09:25.038 "zcopy": false,
00:09:25.038 "get_zone_info": false,
00:09:25.038 "zone_management": false,
00:09:25.038 "zone_append": false,
00:09:25.038 "compare": false,
00:09:25.038 "compare_and_write": false,
00:09:25.038 "abort": false,
00:09:25.038 "seek_hole": false,
00:09:25.038 "seek_data": false,
00:09:25.038 "copy": false,
00:09:25.038 "nvme_iov_md": false
00:09:25.038 },
00:09:25.038 "memory_domains": [
00:09:25.038 {
00:09:25.038 "dma_device_id": "system",
00:09:25.038 "dma_device_type": 1
00:09:25.038 },
00:09:25.038 {
00:09:25.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:25.038 "dma_device_type": 2
00:09:25.038 },
00:09:25.038 {
00:09:25.038 "dma_device_id": "system",
00:09:25.038 "dma_device_type": 1
00:09:25.038 },
00:09:25.038 {
00:09:25.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:25.038 "dma_device_type": 2
00:09:25.038 },
00:09:25.038 {
00:09:25.038 "dma_device_id": "system",
00:09:25.038 "dma_device_type": 1
00:09:25.038 },
00:09:25.038 {
00:09:25.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:25.038 "dma_device_type": 2
00:09:25.038 }
00:09:25.038 ],
00:09:25.038 "driver_specific": {
00:09:25.038 "raid": {
00:09:25.038 "uuid": "aa35c115-c5c9-4fac-ad9c-10753470e5a8",
00:09:25.038 "strip_size_kb": 0,
00:09:25.038 "state": "online",
00:09:25.038 "raid_level": "raid1",
00:09:25.038 "superblock": true,
00:09:25.038 "num_base_bdevs": 3,
00:09:25.038 "num_base_bdevs_discovered": 3,
00:09:25.038 "num_base_bdevs_operational": 3,
00:09:25.038 "base_bdevs_list": [
00:09:25.038 {
00:09:25.039 "name": "pt1",
00:09:25.039 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:25.039 "is_configured": true,
00:09:25.039 "data_offset": 2048,
00:09:25.039 "data_size": 63488
00:09:25.039 },
00:09:25.039 {
00:09:25.039 "name": "pt2",
00:09:25.039 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:25.039 "is_configured": true,
00:09:25.039 "data_offset": 2048,
00:09:25.039 "data_size": 63488
00:09:25.039 },
00:09:25.039 {
00:09:25.039 "name": "pt3",
00:09:25.039 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:25.039 "is_configured": true,
00:09:25.039 "data_offset": 2048,
00:09:25.039 "data_size": 63488
00:09:25.039 }
00:09:25.039 ]
00:09:25.039 }
00:09:25.039 }
00:09:25.039 }'
00:09:25.039 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:25.039 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:25.039 pt2
00:09:25.039 pt3'
00:09:25.039 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:25.039 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:25.039 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:25.039 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:25.039 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.039 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.039 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.299 [2024-11-20 13:23:06.866113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=aa35c115-c5c9-4fac-ad9c-10753470e5a8
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z aa35c115-c5c9-4fac-ad9c-10753470e5a8 ']'
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.299 [2024-11-20 13:23:06.909765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:25.299 [2024-11-20 13:23:06.909845] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:25.299 [2024-11-20 13:23:06.909963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:25.299 [2024-11-20 13:23:06.910080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:25.299 [2024-11-20 13:23:06.910137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:25.299 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.559 13:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.559 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:09:25.559 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:25.559 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.559 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.559 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.559 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:09:25.559 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:25.559 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:09:25.559 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:25.559 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:25.559 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:25.559 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:25.559 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:25.559 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:25.559 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.559 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.559 [2024-11-20 13:23:07.057541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:25.559 [2024-11-20 13:23:07.059849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:25.559 [2024-11-20 13:23:07.059944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:09:25.560 [2024-11-20 13:23:07.060034] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:25.560 [2024-11-20 13:23:07.060605] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:25.560 [2024-11-20 13:23:07.060794] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:09:25.560 [2024-11-20 13:23:07.060907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:25.560 [2024-11-20 13:23:07.060949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring
00:09:25.560 request:
00:09:25.560 {
00:09:25.560 "name": "raid_bdev1",
00:09:25.560 "raid_level": "raid1",
00:09:25.560 "base_bdevs": [
00:09:25.560 "malloc1",
00:09:25.560 "malloc2",
00:09:25.560 "malloc3"
00:09:25.560 ],
00:09:25.560 "superblock": false,
00:09:25.560 "method": "bdev_raid_create",
00:09:25.560 "req_id": 1
00:09:25.560 }
00:09:25.560 Got JSON-RPC error response
00:09:25.560 response:
00:09:25.560 {
00:09:25.560 "code": -17,
00:09:25.560 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:25.560 }
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.560 [2024-11-20 13:23:07.125466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:25.560 [2024-11-20 13:23:07.125661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:25.560 [2024-11-20 13:23:07.125751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:09:25.560 [2024-11-20 13:23:07.125826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:25.560 [2024-11-20 13:23:07.128666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:25.560 [2024-11-20 13:23:07.128834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:25.560 [2024-11-20 13:23:07.128984] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:25.560 [2024-11-20 13:23:07.129073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:25.560 pt1
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:25.560 "name": "raid_bdev1",
00:09:25.560 "uuid": "aa35c115-c5c9-4fac-ad9c-10753470e5a8",
00:09:25.560 "strip_size_kb": 0,
00:09:25.560 "state": "configuring",
00:09:25.560 "raid_level": "raid1",
00:09:25.560 "superblock": true,
00:09:25.560 "num_base_bdevs": 3,
00:09:25.560 "num_base_bdevs_discovered": 1,
00:09:25.560 "num_base_bdevs_operational": 3,
00:09:25.560 "base_bdevs_list": [
00:09:25.560 {
00:09:25.560 "name": "pt1",
00:09:25.560 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:25.560 "is_configured": true,
00:09:25.560 "data_offset": 2048,
00:09:25.560 "data_size": 63488
00:09:25.560 },
00:09:25.560 {
00:09:25.560 "name": null,
00:09:25.560 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:25.560 "is_configured": false,
00:09:25.560 "data_offset": 2048,
00:09:25.560 "data_size": 63488
00:09:25.560 },
00:09:25.560 {
00:09:25.560 "name": null,
00:09:25.560 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:25.560 "is_configured": false,
00:09:25.560 "data_offset": 2048,
00:09:25.560 "data_size": 63488
00:09:25.560 }
00:09:25.560 ]
00:09:25.560 }'
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:25.560 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.130 [2024-11-20 13:23:07.528858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:26.130 [2024-11-20 13:23:07.529161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:26.130 [2024-11-20 13:23:07.529300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:09:26.130 [2024-11-20 13:23:07.529366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:26.130 [2024-11-20 13:23:07.529892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:26.130 [2024-11-20 13:23:07.530053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:26.130 [2024-11-20 13:23:07.530259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:26.130 [2024-11-20 13:23:07.530322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:26.130 pt2
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.130 [2024-11-20 13:23:07.540831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:26.130 "name": "raid_bdev1",
00:09:26.130 "uuid": "aa35c115-c5c9-4fac-ad9c-10753470e5a8",
00:09:26.130 "strip_size_kb": 0,
00:09:26.130 "state": "configuring",
00:09:26.130 "raid_level": "raid1",
00:09:26.130 "superblock": true,
00:09:26.130 "num_base_bdevs": 3,
00:09:26.130 "num_base_bdevs_discovered": 1,
00:09:26.130 "num_base_bdevs_operational": 3,
00:09:26.130 "base_bdevs_list": [
00:09:26.130 {
00:09:26.130 "name": "pt1",
00:09:26.130 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:26.130 "is_configured": true,
00:09:26.130 "data_offset": 2048,
00:09:26.130 "data_size": 63488
00:09:26.130 },
00:09:26.130 {
00:09:26.130 "name": null,
00:09:26.130 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:26.130 "is_configured": false,
00:09:26.130 "data_offset": 0,
00:09:26.130 "data_size": 63488
00:09:26.130 },
00:09:26.130 {
00:09:26.130 "name": null,
00:09:26.130 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:26.130 "is_configured": false,
00:09:26.130 "data_offset": 2048,
"data_size": 63488 00:09:26.130 } 00:09:26.130 ] 00:09:26.130 }' 00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.130 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.391 [2024-11-20 13:23:07.956185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:26.391 [2024-11-20 13:23:07.956498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.391 [2024-11-20 13:23:07.956611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:26.391 [2024-11-20 13:23:07.956708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.391 [2024-11-20 13:23:07.957329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.391 [2024-11-20 13:23:07.957487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:26.391 [2024-11-20 13:23:07.957675] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:26.391 [2024-11-20 13:23:07.957736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:26.391 pt2 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.391 [2024-11-20 13:23:07.968133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:26.391 [2024-11-20 13:23:07.968288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.391 [2024-11-20 13:23:07.968366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:26.391 [2024-11-20 13:23:07.968447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.391 [2024-11-20 13:23:07.968947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.391 [2024-11-20 13:23:07.969106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:26.391 [2024-11-20 13:23:07.969258] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:26.391 [2024-11-20 13:23:07.969326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:26.391 [2024-11-20 13:23:07.969458] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:26.391 [2024-11-20 13:23:07.969507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:26.391 [2024-11-20 13:23:07.969783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:26.391 [2024-11-20 13:23:07.969939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 
00:09:26.391 [2024-11-20 13:23:07.969980] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:26.391 [2024-11-20 13:23:07.970136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.391 pt3 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.391 13:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.391 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.391 "name": "raid_bdev1", 00:09:26.391 "uuid": "aa35c115-c5c9-4fac-ad9c-10753470e5a8", 00:09:26.391 "strip_size_kb": 0, 00:09:26.391 "state": "online", 00:09:26.391 "raid_level": "raid1", 00:09:26.391 "superblock": true, 00:09:26.391 "num_base_bdevs": 3, 00:09:26.391 "num_base_bdevs_discovered": 3, 00:09:26.391 "num_base_bdevs_operational": 3, 00:09:26.391 "base_bdevs_list": [ 00:09:26.391 { 00:09:26.391 "name": "pt1", 00:09:26.391 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.391 "is_configured": true, 00:09:26.391 "data_offset": 2048, 00:09:26.391 "data_size": 63488 00:09:26.391 }, 00:09:26.391 { 00:09:26.391 "name": "pt2", 00:09:26.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.391 "is_configured": true, 00:09:26.391 "data_offset": 2048, 00:09:26.391 "data_size": 63488 00:09:26.391 }, 00:09:26.391 { 00:09:26.391 "name": "pt3", 00:09:26.391 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:26.391 "is_configured": true, 00:09:26.391 "data_offset": 2048, 00:09:26.391 "data_size": 63488 00:09:26.391 } 00:09:26.391 ] 00:09:26.391 }' 00:09:26.391 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.391 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.961 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:26.961 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:26.961 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:26.961 13:23:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:26.961 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:26.961 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:26.961 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:26.961 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:26.961 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.961 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.961 [2024-11-20 13:23:08.395814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.961 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:26.962 "name": "raid_bdev1", 00:09:26.962 "aliases": [ 00:09:26.962 "aa35c115-c5c9-4fac-ad9c-10753470e5a8" 00:09:26.962 ], 00:09:26.962 "product_name": "Raid Volume", 00:09:26.962 "block_size": 512, 00:09:26.962 "num_blocks": 63488, 00:09:26.962 "uuid": "aa35c115-c5c9-4fac-ad9c-10753470e5a8", 00:09:26.962 "assigned_rate_limits": { 00:09:26.962 "rw_ios_per_sec": 0, 00:09:26.962 "rw_mbytes_per_sec": 0, 00:09:26.962 "r_mbytes_per_sec": 0, 00:09:26.962 "w_mbytes_per_sec": 0 00:09:26.962 }, 00:09:26.962 "claimed": false, 00:09:26.962 "zoned": false, 00:09:26.962 "supported_io_types": { 00:09:26.962 "read": true, 00:09:26.962 "write": true, 00:09:26.962 "unmap": false, 00:09:26.962 "flush": false, 00:09:26.962 "reset": true, 00:09:26.962 "nvme_admin": false, 00:09:26.962 "nvme_io": false, 00:09:26.962 "nvme_io_md": false, 00:09:26.962 "write_zeroes": true, 00:09:26.962 "zcopy": false, 00:09:26.962 "get_zone_info": false, 00:09:26.962 
"zone_management": false, 00:09:26.962 "zone_append": false, 00:09:26.962 "compare": false, 00:09:26.962 "compare_and_write": false, 00:09:26.962 "abort": false, 00:09:26.962 "seek_hole": false, 00:09:26.962 "seek_data": false, 00:09:26.962 "copy": false, 00:09:26.962 "nvme_iov_md": false 00:09:26.962 }, 00:09:26.962 "memory_domains": [ 00:09:26.962 { 00:09:26.962 "dma_device_id": "system", 00:09:26.962 "dma_device_type": 1 00:09:26.962 }, 00:09:26.962 { 00:09:26.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.962 "dma_device_type": 2 00:09:26.962 }, 00:09:26.962 { 00:09:26.962 "dma_device_id": "system", 00:09:26.962 "dma_device_type": 1 00:09:26.962 }, 00:09:26.962 { 00:09:26.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.962 "dma_device_type": 2 00:09:26.962 }, 00:09:26.962 { 00:09:26.962 "dma_device_id": "system", 00:09:26.962 "dma_device_type": 1 00:09:26.962 }, 00:09:26.962 { 00:09:26.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.962 "dma_device_type": 2 00:09:26.962 } 00:09:26.962 ], 00:09:26.962 "driver_specific": { 00:09:26.962 "raid": { 00:09:26.962 "uuid": "aa35c115-c5c9-4fac-ad9c-10753470e5a8", 00:09:26.962 "strip_size_kb": 0, 00:09:26.962 "state": "online", 00:09:26.962 "raid_level": "raid1", 00:09:26.962 "superblock": true, 00:09:26.962 "num_base_bdevs": 3, 00:09:26.962 "num_base_bdevs_discovered": 3, 00:09:26.962 "num_base_bdevs_operational": 3, 00:09:26.962 "base_bdevs_list": [ 00:09:26.962 { 00:09:26.962 "name": "pt1", 00:09:26.962 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.962 "is_configured": true, 00:09:26.962 "data_offset": 2048, 00:09:26.962 "data_size": 63488 00:09:26.962 }, 00:09:26.962 { 00:09:26.962 "name": "pt2", 00:09:26.962 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.962 "is_configured": true, 00:09:26.962 "data_offset": 2048, 00:09:26.962 "data_size": 63488 00:09:26.962 }, 00:09:26.962 { 00:09:26.962 "name": "pt3", 00:09:26.962 "uuid": "00000000-0000-0000-0000-000000000003", 
00:09:26.962 "is_configured": true, 00:09:26.962 "data_offset": 2048, 00:09:26.962 "data_size": 63488 00:09:26.962 } 00:09:26.962 ] 00:09:26.962 } 00:09:26.962 } 00:09:26.962 }' 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:26.962 pt2 00:09:26.962 pt3' 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.962 
13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.962 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.223 [2024-11-20 13:23:08.691177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' aa35c115-c5c9-4fac-ad9c-10753470e5a8 '!=' aa35c115-c5c9-4fac-ad9c-10753470e5a8 ']' 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.223 [2024-11-20 13:23:08.738891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.223 "name": "raid_bdev1", 00:09:27.223 "uuid": "aa35c115-c5c9-4fac-ad9c-10753470e5a8", 00:09:27.223 "strip_size_kb": 0, 00:09:27.223 "state": "online", 00:09:27.223 "raid_level": "raid1", 00:09:27.223 "superblock": true, 00:09:27.223 "num_base_bdevs": 3, 00:09:27.223 "num_base_bdevs_discovered": 2, 00:09:27.223 "num_base_bdevs_operational": 2, 00:09:27.223 "base_bdevs_list": [ 00:09:27.223 { 00:09:27.223 "name": null, 00:09:27.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.223 "is_configured": false, 00:09:27.223 "data_offset": 0, 00:09:27.223 "data_size": 63488 00:09:27.223 }, 00:09:27.223 { 00:09:27.223 "name": "pt2", 00:09:27.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:27.223 "is_configured": true, 00:09:27.223 "data_offset": 2048, 00:09:27.223 "data_size": 63488 00:09:27.223 }, 00:09:27.223 { 00:09:27.223 "name": "pt3", 00:09:27.223 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:27.223 "is_configured": true, 00:09:27.223 "data_offset": 2048, 00:09:27.223 "data_size": 63488 00:09:27.223 } 00:09:27.223 ] 00:09:27.223 }' 00:09:27.223 13:23:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.223 13:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.483 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:27.483 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.483 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.483 [2024-11-20 13:23:09.142156] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:27.483 [2024-11-20 13:23:09.142237] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.483 [2024-11-20 13:23:09.142334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.483 [2024-11-20 13:23:09.142421] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.483 [2024-11-20 13:23:09.142479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:27.483 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.742 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.742 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:27.742 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.742 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.742 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.742 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:27.742 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:27.742 
13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:27.742 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:27.742 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:27.742 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:27.743 [2024-11-20 13:23:09.218066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:27.743 [2024-11-20 13:23:09.218516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.743 [2024-11-20 13:23:09.218583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:27.743 [2024-11-20 13:23:09.218616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.743 [2024-11-20 13:23:09.220963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.743 [2024-11-20 13:23:09.221108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:27.743 [2024-11-20 13:23:09.221268] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:27.743 [2024-11-20 13:23:09.221344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:27.743 pt2 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.743 "name": "raid_bdev1", 00:09:27.743 "uuid": "aa35c115-c5c9-4fac-ad9c-10753470e5a8", 00:09:27.743 "strip_size_kb": 0, 00:09:27.743 "state": "configuring", 00:09:27.743 "raid_level": "raid1", 00:09:27.743 "superblock": true, 00:09:27.743 "num_base_bdevs": 3, 00:09:27.743 "num_base_bdevs_discovered": 1, 00:09:27.743 "num_base_bdevs_operational": 2, 00:09:27.743 "base_bdevs_list": [ 00:09:27.743 { 00:09:27.743 "name": null, 00:09:27.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.743 "is_configured": false, 00:09:27.743 "data_offset": 2048, 00:09:27.743 "data_size": 63488 00:09:27.743 }, 00:09:27.743 { 00:09:27.743 "name": "pt2", 00:09:27.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:27.743 "is_configured": true, 00:09:27.743 "data_offset": 2048, 00:09:27.743 "data_size": 63488 00:09:27.743 }, 00:09:27.743 { 00:09:27.743 "name": null, 00:09:27.743 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:27.743 "is_configured": false, 00:09:27.743 "data_offset": 2048, 00:09:27.743 "data_size": 63488 00:09:27.743 } 00:09:27.743 ] 00:09:27.743 }' 
00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.743 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.002 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:28.002 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:28.002 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:28.002 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:28.002 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.002 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.002 [2024-11-20 13:23:09.629608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:28.002 [2024-11-20 13:23:09.630333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.002 [2024-11-20 13:23:09.630658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:28.002 [2024-11-20 13:23:09.630913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.002 [2024-11-20 13:23:09.632311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.002 [2024-11-20 13:23:09.632648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:28.002 [2024-11-20 13:23:09.633125] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:28.002 [2024-11-20 13:23:09.633312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:28.002 [2024-11-20 13:23:09.633680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:28.002 [2024-11-20 13:23:09.633787] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:28.002 [2024-11-20 13:23:09.634633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:28.002 [2024-11-20 13:23:09.635134] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:28.002 pt3 00:09:28.002 [2024-11-20 13:23:09.635274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:28.002 [2024-11-20 13:23:09.635657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.002 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.002 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:28.002 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.002 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.002 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.002 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.002 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.003 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.003 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.003 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.003 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.003 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.003 13:23:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.003 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.003 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.003 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.262 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.262 "name": "raid_bdev1", 00:09:28.262 "uuid": "aa35c115-c5c9-4fac-ad9c-10753470e5a8", 00:09:28.262 "strip_size_kb": 0, 00:09:28.262 "state": "online", 00:09:28.262 "raid_level": "raid1", 00:09:28.262 "superblock": true, 00:09:28.262 "num_base_bdevs": 3, 00:09:28.262 "num_base_bdevs_discovered": 2, 00:09:28.262 "num_base_bdevs_operational": 2, 00:09:28.262 "base_bdevs_list": [ 00:09:28.262 { 00:09:28.262 "name": null, 00:09:28.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.262 "is_configured": false, 00:09:28.262 "data_offset": 2048, 00:09:28.262 "data_size": 63488 00:09:28.262 }, 00:09:28.262 { 00:09:28.262 "name": "pt2", 00:09:28.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:28.262 "is_configured": true, 00:09:28.262 "data_offset": 2048, 00:09:28.262 "data_size": 63488 00:09:28.262 }, 00:09:28.262 { 00:09:28.262 "name": "pt3", 00:09:28.262 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:28.262 "is_configured": true, 00:09:28.262 "data_offset": 2048, 00:09:28.262 "data_size": 63488 00:09:28.262 } 00:09:28.262 ] 00:09:28.262 }' 00:09:28.262 13:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.262 13:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.521 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:28.521 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.521 
13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.521 [2024-11-20 13:23:10.037079] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:28.522 [2024-11-20 13:23:10.037185] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:28.522 [2024-11-20 13:23:10.037292] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:28.522 [2024-11-20 13:23:10.037375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:28.522 [2024-11-20 13:23:10.037424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.522 13:23:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.522 [2024-11-20 13:23:10.104957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:28.522 [2024-11-20 13:23:10.105276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.522 [2024-11-20 13:23:10.105378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:28.522 [2024-11-20 13:23:10.105462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.522 [2024-11-20 13:23:10.108141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.522 [2024-11-20 13:23:10.108224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:28.522 [2024-11-20 13:23:10.108341] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:28.522 [2024-11-20 13:23:10.108423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:28.522 [2024-11-20 13:23:10.108603] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:28.522 [2024-11-20 13:23:10.108675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:28.522 [2024-11-20 13:23:10.108758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:09:28.522 [2024-11-20 
13:23:10.108885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:28.522 pt1 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.522 13:23:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.522 "name": "raid_bdev1", 00:09:28.522 "uuid": "aa35c115-c5c9-4fac-ad9c-10753470e5a8", 00:09:28.522 "strip_size_kb": 0, 00:09:28.522 "state": "configuring", 00:09:28.522 "raid_level": "raid1", 00:09:28.522 "superblock": true, 00:09:28.522 "num_base_bdevs": 3, 00:09:28.522 "num_base_bdevs_discovered": 1, 00:09:28.522 "num_base_bdevs_operational": 2, 00:09:28.522 "base_bdevs_list": [ 00:09:28.522 { 00:09:28.522 "name": null, 00:09:28.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.522 "is_configured": false, 00:09:28.522 "data_offset": 2048, 00:09:28.522 "data_size": 63488 00:09:28.522 }, 00:09:28.522 { 00:09:28.522 "name": "pt2", 00:09:28.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:28.522 "is_configured": true, 00:09:28.522 "data_offset": 2048, 00:09:28.522 "data_size": 63488 00:09:28.522 }, 00:09:28.522 { 00:09:28.522 "name": null, 00:09:28.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:28.522 "is_configured": false, 00:09:28.522 "data_offset": 2048, 00:09:28.522 "data_size": 63488 00:09:28.522 } 00:09:28.522 ] 00:09:28.522 }' 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.522 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.092 [2024-11-20 13:23:10.584296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:29.092 [2024-11-20 13:23:10.584479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.092 [2024-11-20 13:23:10.584525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:29.092 [2024-11-20 13:23:10.584568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.092 [2024-11-20 13:23:10.585151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.092 [2024-11-20 13:23:10.585225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:29.092 [2024-11-20 13:23:10.585352] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:29.092 [2024-11-20 13:23:10.585420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:29.092 [2024-11-20 13:23:10.585574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:09:29.092 [2024-11-20 13:23:10.585622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:29.092 [2024-11-20 13:23:10.585919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:29.092 [2024-11-20 13:23:10.586132] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:09:29.092 [2024-11-20 13:23:10.586177] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000002380 00:09:29.092 [2024-11-20 13:23:10.586336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.092 pt3 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.092 "name": "raid_bdev1", 00:09:29.092 "uuid": "aa35c115-c5c9-4fac-ad9c-10753470e5a8", 00:09:29.092 "strip_size_kb": 0, 00:09:29.092 "state": "online", 00:09:29.092 "raid_level": "raid1", 00:09:29.092 "superblock": true, 00:09:29.092 "num_base_bdevs": 3, 00:09:29.092 "num_base_bdevs_discovered": 2, 00:09:29.092 "num_base_bdevs_operational": 2, 00:09:29.092 "base_bdevs_list": [ 00:09:29.092 { 00:09:29.092 "name": null, 00:09:29.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.092 "is_configured": false, 00:09:29.092 "data_offset": 2048, 00:09:29.092 "data_size": 63488 00:09:29.092 }, 00:09:29.092 { 00:09:29.092 "name": "pt2", 00:09:29.092 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.092 "is_configured": true, 00:09:29.092 "data_offset": 2048, 00:09:29.092 "data_size": 63488 00:09:29.092 }, 00:09:29.092 { 00:09:29.092 "name": "pt3", 00:09:29.092 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:29.092 "is_configured": true, 00:09:29.092 "data_offset": 2048, 00:09:29.092 "data_size": 63488 00:09:29.092 } 00:09:29.092 ] 00:09:29.092 }' 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.092 13:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.661 13:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:29.661 13:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:29.661 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.661 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.661 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.661 13:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:29.661 
13:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:29.661 13:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:29.661 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.661 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.661 [2024-11-20 13:23:11.075836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.662 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.662 13:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' aa35c115-c5c9-4fac-ad9c-10753470e5a8 '!=' aa35c115-c5c9-4fac-ad9c-10753470e5a8 ']' 00:09:29.662 13:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79361 00:09:29.662 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 79361 ']' 00:09:29.662 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 79361 00:09:29.662 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:29.662 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.662 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79361 00:09:29.662 killing process with pid 79361 00:09:29.662 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.662 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.662 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79361' 00:09:29.662 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 79361 00:09:29.662 [2024-11-20 
13:23:11.133676] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.662 [2024-11-20 13:23:11.133779] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.662 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 79361 00:09:29.662 [2024-11-20 13:23:11.133851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.662 [2024-11-20 13:23:11.133862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:09:29.662 [2024-11-20 13:23:11.196905] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.922 13:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:29.922 00:09:29.922 real 0m6.449s 00:09:29.922 user 0m10.636s 00:09:29.922 sys 0m1.395s 00:09:29.922 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.922 13:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.922 ************************************ 00:09:29.922 END TEST raid_superblock_test 00:09:29.922 ************************************ 00:09:29.922 13:23:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:29.922 13:23:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:29.922 13:23:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.922 13:23:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.182 ************************************ 00:09:30.182 START TEST raid_read_error_test 00:09:30.182 ************************************ 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:30.182 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:30.183 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:30.183 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:30.183 13:23:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:30.183 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:30.183 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:30.183 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:30.183 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sErmyFoexE 00:09:30.183 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79795 00:09:30.183 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:30.183 13:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79795 00:09:30.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.183 13:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 79795 ']' 00:09:30.183 13:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.183 13:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.183 13:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.183 13:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.183 13:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.183 [2024-11-20 13:23:11.694158] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:09:30.183 [2024-11-20 13:23:11.694279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79795 ] 00:09:30.441 [2024-11-20 13:23:11.850541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.441 [2024-11-20 13:23:11.891870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.441 [2024-11-20 13:23:11.970058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.441 [2024-11-20 13:23:11.970189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.011 BaseBdev1_malloc 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.011 true 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.011 [2024-11-20 13:23:12.589712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:31.011 [2024-11-20 13:23:12.589883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.011 [2024-11-20 13:23:12.589936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:31.011 [2024-11-20 13:23:12.589969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.011 [2024-11-20 13:23:12.592583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.011 [2024-11-20 13:23:12.592661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:31.011 BaseBdev1 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.011 BaseBdev2_malloc 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.011 true 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.011 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.011 [2024-11-20 13:23:12.637228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:31.011 [2024-11-20 13:23:12.637357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.011 [2024-11-20 13:23:12.637401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:31.011 [2024-11-20 13:23:12.637443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.011 [2024-11-20 13:23:12.639999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.011 [2024-11-20 13:23:12.640098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:31.011 BaseBdev2 00:09:31.012 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.012 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.012 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:31.012 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.012 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.012 BaseBdev3_malloc 00:09:31.012 13:23:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.012 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:31.012 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.012 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.012 true 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.272 [2024-11-20 13:23:12.685011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:31.272 [2024-11-20 13:23:12.685167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.272 [2024-11-20 13:23:12.685210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:31.272 [2024-11-20 13:23:12.685239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.272 [2024-11-20 13:23:12.688012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.272 [2024-11-20 13:23:12.688118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:31.272 BaseBdev3 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.272 [2024-11-20 13:23:12.697103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.272 [2024-11-20 13:23:12.699579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.272 [2024-11-20 13:23:12.699733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.272 [2024-11-20 13:23:12.699982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:31.272 [2024-11-20 13:23:12.700058] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:31.272 [2024-11-20 13:23:12.700394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:09:31.272 [2024-11-20 13:23:12.700603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:31.272 [2024-11-20 13:23:12.700659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:31.272 [2024-11-20 13:23:12.700878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.272 13:23:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.272 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.273 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.273 "name": "raid_bdev1", 00:09:31.273 "uuid": "abc58683-cde5-4543-8d78-6a169f2bb236", 00:09:31.273 "strip_size_kb": 0, 00:09:31.273 "state": "online", 00:09:31.273 "raid_level": "raid1", 00:09:31.273 "superblock": true, 00:09:31.273 "num_base_bdevs": 3, 00:09:31.273 "num_base_bdevs_discovered": 3, 00:09:31.273 "num_base_bdevs_operational": 3, 00:09:31.273 "base_bdevs_list": [ 00:09:31.273 { 00:09:31.273 "name": "BaseBdev1", 00:09:31.273 "uuid": "eebba0a9-2d08-5a6a-9816-f52bca4d415f", 00:09:31.273 "is_configured": true, 00:09:31.273 "data_offset": 2048, 00:09:31.273 "data_size": 63488 00:09:31.273 }, 00:09:31.273 { 00:09:31.273 "name": "BaseBdev2", 00:09:31.273 "uuid": "1334c838-da50-51e9-8ed1-c337718cb0d8", 00:09:31.273 "is_configured": true, 00:09:31.273 "data_offset": 2048, 00:09:31.273 "data_size": 63488 
00:09:31.273 }, 00:09:31.273 { 00:09:31.273 "name": "BaseBdev3", 00:09:31.273 "uuid": "c4b4044c-72c6-5cd3-a43c-af84627eba0b", 00:09:31.273 "is_configured": true, 00:09:31.273 "data_offset": 2048, 00:09:31.273 "data_size": 63488 00:09:31.273 } 00:09:31.273 ] 00:09:31.273 }' 00:09:31.273 13:23:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.273 13:23:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.533 13:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:31.533 13:23:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:31.792 [2024-11-20 13:23:13.224776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.731 
13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.731 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.731 "name": "raid_bdev1", 00:09:32.731 "uuid": "abc58683-cde5-4543-8d78-6a169f2bb236", 00:09:32.731 "strip_size_kb": 0, 00:09:32.731 "state": "online", 00:09:32.731 "raid_level": "raid1", 00:09:32.731 "superblock": true, 00:09:32.731 "num_base_bdevs": 3, 00:09:32.731 "num_base_bdevs_discovered": 3, 00:09:32.731 "num_base_bdevs_operational": 3, 00:09:32.731 "base_bdevs_list": [ 00:09:32.731 { 00:09:32.731 "name": "BaseBdev1", 00:09:32.731 "uuid": "eebba0a9-2d08-5a6a-9816-f52bca4d415f", 
00:09:32.731 "is_configured": true, 00:09:32.731 "data_offset": 2048, 00:09:32.731 "data_size": 63488 00:09:32.731 }, 00:09:32.731 { 00:09:32.731 "name": "BaseBdev2", 00:09:32.732 "uuid": "1334c838-da50-51e9-8ed1-c337718cb0d8", 00:09:32.732 "is_configured": true, 00:09:32.732 "data_offset": 2048, 00:09:32.732 "data_size": 63488 00:09:32.732 }, 00:09:32.732 { 00:09:32.732 "name": "BaseBdev3", 00:09:32.732 "uuid": "c4b4044c-72c6-5cd3-a43c-af84627eba0b", 00:09:32.732 "is_configured": true, 00:09:32.732 "data_offset": 2048, 00:09:32.732 "data_size": 63488 00:09:32.732 } 00:09:32.732 ] 00:09:32.732 }' 00:09:32.732 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.732 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.992 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:32.992 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.992 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.992 [2024-11-20 13:23:14.598110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:32.992 [2024-11-20 13:23:14.598251] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.992 [2024-11-20 13:23:14.601356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.992 [2024-11-20 13:23:14.601463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.992 [2024-11-20 13:23:14.601612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.992 [2024-11-20 13:23:14.601672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:32.992 { 00:09:32.992 "results": [ 00:09:32.992 { 00:09:32.992 "job": "raid_bdev1", 
00:09:32.992 "core_mask": "0x1", 00:09:32.992 "workload": "randrw", 00:09:32.992 "percentage": 50, 00:09:32.992 "status": "finished", 00:09:32.992 "queue_depth": 1, 00:09:32.992 "io_size": 131072, 00:09:32.992 "runtime": 1.373917, 00:09:32.992 "iops": 9806.997074786905, 00:09:32.992 "mibps": 1225.874634348363, 00:09:32.992 "io_failed": 0, 00:09:32.992 "io_timeout": 0, 00:09:32.992 "avg_latency_us": 99.22141183440468, 00:09:32.992 "min_latency_us": 23.14061135371179, 00:09:32.992 "max_latency_us": 1681.3275109170306 00:09:32.992 } 00:09:32.992 ], 00:09:32.992 "core_count": 1 00:09:32.992 } 00:09:32.992 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.992 13:23:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79795 00:09:32.992 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 79795 ']' 00:09:32.992 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 79795 00:09:32.992 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:32.992 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.992 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79795 00:09:32.992 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:32.992 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:32.992 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79795' 00:09:32.992 killing process with pid 79795 00:09:32.992 13:23:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 79795 00:09:32.992 [2024-11-20 13:23:14.645795] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:32.992 13:23:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 79795 00:09:33.252 [2024-11-20 13:23:14.696933] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.513 13:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:33.513 13:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sErmyFoexE 00:09:33.513 13:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:33.513 13:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:33.513 13:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:33.513 13:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:33.513 13:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:33.513 13:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:33.513 00:09:33.513 real 0m3.436s 00:09:33.513 user 0m4.257s 00:09:33.513 sys 0m0.607s 00:09:33.513 13:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.513 13:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.513 ************************************ 00:09:33.513 END TEST raid_read_error_test 00:09:33.513 ************************************ 00:09:33.513 13:23:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:33.513 13:23:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:33.513 13:23:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.513 13:23:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.513 ************************************ 00:09:33.513 START TEST raid_write_error_test 00:09:33.513 ************************************ 00:09:33.513 13:23:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sJ6NPBCDun 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79930 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79930 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 79930 ']' 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.513 13:23:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.773 [2024-11-20 13:23:15.204934] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:09:33.773 [2024-11-20 13:23:15.205164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79930 ] 00:09:33.773 [2024-11-20 13:23:15.358764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.774 [2024-11-20 13:23:15.398731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.033 [2024-11-20 13:23:15.476139] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.033 [2024-11-20 13:23:15.476201] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.634 BaseBdev1_malloc 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.634 true 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.634 [2024-11-20 13:23:16.073650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:34.634 [2024-11-20 13:23:16.073762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.634 [2024-11-20 13:23:16.073813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:34.634 [2024-11-20 13:23:16.073852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.634 [2024-11-20 13:23:16.076052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.634 [2024-11-20 13:23:16.076139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:34.634 BaseBdev1 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:34.634 BaseBdev2_malloc 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.634 true 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.634 [2024-11-20 13:23:16.114688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:34.634 [2024-11-20 13:23:16.114743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.634 [2024-11-20 13:23:16.114763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:34.634 [2024-11-20 13:23:16.114782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.634 [2024-11-20 13:23:16.117245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.634 [2024-11-20 13:23:16.117294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:34.634 BaseBdev2 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.634 13:23:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.634 BaseBdev3_malloc 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.634 true 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.634 [2024-11-20 13:23:16.155618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:34.634 [2024-11-20 13:23:16.155725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.634 [2024-11-20 13:23:16.155753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:34.634 [2024-11-20 13:23:16.155764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.634 [2024-11-20 13:23:16.157972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.634 [2024-11-20 13:23:16.158037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:34.634 BaseBdev3 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.634 [2024-11-20 13:23:16.167678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.634 [2024-11-20 13:23:16.169584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.634 [2024-11-20 13:23:16.169716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.634 [2024-11-20 13:23:16.169959] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:34.634 [2024-11-20 13:23:16.170052] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:34.634 [2024-11-20 13:23:16.170345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:09:34.634 [2024-11-20 13:23:16.170560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:34.634 [2024-11-20 13:23:16.170612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:34.634 [2024-11-20 13:23:16.170786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.634 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.634 "name": "raid_bdev1", 00:09:34.634 "uuid": "30650206-9871-4e20-832e-ae78796885aa", 00:09:34.634 "strip_size_kb": 0, 00:09:34.634 "state": "online", 00:09:34.634 "raid_level": "raid1", 00:09:34.634 "superblock": true, 00:09:34.634 "num_base_bdevs": 3, 00:09:34.634 "num_base_bdevs_discovered": 3, 00:09:34.634 "num_base_bdevs_operational": 3, 00:09:34.634 "base_bdevs_list": [ 00:09:34.634 { 00:09:34.634 "name": "BaseBdev1", 00:09:34.634 
"uuid": "52c1b50e-51bd-57e0-9761-73c45c411ec2", 00:09:34.634 "is_configured": true, 00:09:34.634 "data_offset": 2048, 00:09:34.634 "data_size": 63488 00:09:34.634 }, 00:09:34.634 { 00:09:34.634 "name": "BaseBdev2", 00:09:34.634 "uuid": "ca482447-db16-5557-8643-f52a0e4ed96f", 00:09:34.634 "is_configured": true, 00:09:34.634 "data_offset": 2048, 00:09:34.634 "data_size": 63488 00:09:34.634 }, 00:09:34.634 { 00:09:34.634 "name": "BaseBdev3", 00:09:34.635 "uuid": "da4dfe6e-97b9-5228-9da4-6e2735f1a6da", 00:09:34.635 "is_configured": true, 00:09:34.635 "data_offset": 2048, 00:09:34.635 "data_size": 63488 00:09:34.635 } 00:09:34.635 ] 00:09:34.635 }' 00:09:34.635 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.635 13:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.204 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:35.204 13:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:35.204 [2024-11-20 13:23:16.643517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.143 [2024-11-20 13:23:17.579160] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:36.143 [2024-11-20 13:23:17.579309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:36.143 [2024-11-20 13:23:17.579541] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002d50 
00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.143 "name": "raid_bdev1", 00:09:36.143 "uuid": "30650206-9871-4e20-832e-ae78796885aa", 00:09:36.143 "strip_size_kb": 0, 00:09:36.143 "state": "online", 00:09:36.143 "raid_level": "raid1", 00:09:36.143 "superblock": true, 00:09:36.143 "num_base_bdevs": 3, 00:09:36.143 "num_base_bdevs_discovered": 2, 00:09:36.143 "num_base_bdevs_operational": 2, 00:09:36.143 "base_bdevs_list": [ 00:09:36.143 { 00:09:36.143 "name": null, 00:09:36.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.143 "is_configured": false, 00:09:36.143 "data_offset": 0, 00:09:36.143 "data_size": 63488 00:09:36.143 }, 00:09:36.143 { 00:09:36.143 "name": "BaseBdev2", 00:09:36.143 "uuid": "ca482447-db16-5557-8643-f52a0e4ed96f", 00:09:36.143 "is_configured": true, 00:09:36.143 "data_offset": 2048, 00:09:36.143 "data_size": 63488 00:09:36.143 }, 00:09:36.143 { 00:09:36.143 "name": "BaseBdev3", 00:09:36.143 "uuid": "da4dfe6e-97b9-5228-9da4-6e2735f1a6da", 00:09:36.143 "is_configured": true, 00:09:36.143 "data_offset": 2048, 00:09:36.143 "data_size": 63488 00:09:36.143 } 00:09:36.143 ] 00:09:36.143 }' 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.143 13:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.404 13:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:36.404 13:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.404 13:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.404 [2024-11-20 13:23:17.997313] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:36.404 [2024-11-20 13:23:17.997418] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.404 [2024-11-20 13:23:17.999935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.404 [2024-11-20 13:23:18.000047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.404 [2024-11-20 13:23:18.000160] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.404 [2024-11-20 13:23:18.000224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:36.404 { 00:09:36.404 "results": [ 00:09:36.404 { 00:09:36.404 "job": "raid_bdev1", 00:09:36.404 "core_mask": "0x1", 00:09:36.404 "workload": "randrw", 00:09:36.404 "percentage": 50, 00:09:36.404 "status": "finished", 00:09:36.404 "queue_depth": 1, 00:09:36.404 "io_size": 131072, 00:09:36.404 "runtime": 1.354444, 00:09:36.404 "iops": 15502.302051616753, 00:09:36.404 "mibps": 1937.787756452094, 00:09:36.404 "io_failed": 0, 00:09:36.404 "io_timeout": 0, 00:09:36.404 "avg_latency_us": 61.69485355882614, 00:09:36.404 "min_latency_us": 23.02882096069869, 00:09:36.404 "max_latency_us": 1445.2262008733624 00:09:36.404 } 00:09:36.404 ], 00:09:36.404 "core_count": 1 00:09:36.404 } 00:09:36.404 13:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.404 13:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79930 00:09:36.404 13:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 79930 ']' 00:09:36.404 13:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 79930 00:09:36.404 13:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:36.404 13:23:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.404 13:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79930 00:09:36.404 13:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.404 13:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.404 13:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79930' 00:09:36.404 killing process with pid 79930 00:09:36.404 13:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 79930 00:09:36.404 [2024-11-20 13:23:18.045499] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.404 13:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 79930 00:09:36.664 [2024-11-20 13:23:18.071502] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:36.664 13:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:36.664 13:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sJ6NPBCDun 00:09:36.664 13:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:36.664 13:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:36.664 13:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:36.664 13:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:36.664 13:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:36.664 13:23:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:36.664 00:09:36.664 real 0m3.186s 00:09:36.664 user 0m3.946s 00:09:36.664 sys 0m0.551s 00:09:36.664 
************************************ 00:09:36.664 END TEST raid_write_error_test 00:09:36.664 ************************************ 00:09:36.664 13:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.664 13:23:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.926 13:23:18 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:36.926 13:23:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:36.926 13:23:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:36.926 13:23:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:36.926 13:23:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.926 13:23:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:36.926 ************************************ 00:09:36.926 START TEST raid_state_function_test 00:09:36.926 ************************************ 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:36.926 Process raid pid: 80057 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80057 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80057' 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80057 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80057 ']' 00:09:36.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.926 13:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.926 [2024-11-20 13:23:18.452901] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:09:36.926 [2024-11-20 13:23:18.453051] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.186 [2024-11-20 13:23:18.607631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.186 [2024-11-20 13:23:18.635163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.186 [2024-11-20 13:23:18.678951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.186 [2024-11-20 13:23:18.679120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.760 [2024-11-20 13:23:19.281219] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.760 [2024-11-20 13:23:19.281336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.760 [2024-11-20 13:23:19.281383] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.760 [2024-11-20 13:23:19.281414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.760 [2024-11-20 13:23:19.281437] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:37.760 [2024-11-20 13:23:19.281484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:37.760 [2024-11-20 13:23:19.281507] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:37.760 [2024-11-20 13:23:19.281535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.760 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.760 "name": "Existed_Raid", 00:09:37.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.761 "strip_size_kb": 64, 00:09:37.761 "state": "configuring", 00:09:37.761 "raid_level": "raid0", 00:09:37.761 "superblock": false, 00:09:37.761 "num_base_bdevs": 4, 00:09:37.761 "num_base_bdevs_discovered": 0, 00:09:37.761 "num_base_bdevs_operational": 4, 00:09:37.761 "base_bdevs_list": [ 00:09:37.761 { 00:09:37.761 "name": "BaseBdev1", 00:09:37.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.761 "is_configured": false, 00:09:37.761 "data_offset": 0, 00:09:37.761 "data_size": 0 00:09:37.761 }, 00:09:37.761 { 00:09:37.761 "name": "BaseBdev2", 00:09:37.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.761 "is_configured": false, 00:09:37.761 "data_offset": 0, 00:09:37.761 "data_size": 0 00:09:37.761 }, 00:09:37.761 { 00:09:37.761 "name": "BaseBdev3", 00:09:37.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.761 "is_configured": false, 00:09:37.761 "data_offset": 0, 00:09:37.761 "data_size": 0 00:09:37.761 }, 00:09:37.761 { 00:09:37.761 "name": "BaseBdev4", 00:09:37.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.761 "is_configured": false, 00:09:37.761 "data_offset": 0, 00:09:37.761 "data_size": 0 00:09:37.761 } 00:09:37.761 ] 00:09:37.761 }' 00:09:37.761 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.761 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.021 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:38.021 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.021 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.021 [2024-11-20 13:23:19.652500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.021 [2024-11-20 13:23:19.652608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:38.021 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.021 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:38.021 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.021 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.021 [2024-11-20 13:23:19.664482] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.021 [2024-11-20 13:23:19.664584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.021 [2024-11-20 13:23:19.664616] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.022 [2024-11-20 13:23:19.664644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.022 [2024-11-20 13:23:19.664666] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:38.022 [2024-11-20 13:23:19.664692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:38.022 [2024-11-20 13:23:19.664714] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:38.022 [2024-11-20 13:23:19.664779] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:38.022 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.022 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:38.022 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.022 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.022 [2024-11-20 13:23:19.685633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.022 BaseBdev1 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.282 [ 00:09:38.282 { 00:09:38.282 "name": "BaseBdev1", 00:09:38.282 "aliases": [ 00:09:38.282 "e365fed6-e3ae-46fc-b23b-fc9e9cdb02a5" 00:09:38.282 ], 00:09:38.282 "product_name": "Malloc disk", 00:09:38.282 "block_size": 512, 00:09:38.282 "num_blocks": 65536, 00:09:38.282 "uuid": "e365fed6-e3ae-46fc-b23b-fc9e9cdb02a5", 00:09:38.282 "assigned_rate_limits": { 00:09:38.282 "rw_ios_per_sec": 0, 00:09:38.282 "rw_mbytes_per_sec": 0, 00:09:38.282 "r_mbytes_per_sec": 0, 00:09:38.282 "w_mbytes_per_sec": 0 00:09:38.282 }, 00:09:38.282 "claimed": true, 00:09:38.282 "claim_type": "exclusive_write", 00:09:38.282 "zoned": false, 00:09:38.282 "supported_io_types": { 00:09:38.282 "read": true, 00:09:38.282 "write": true, 00:09:38.282 "unmap": true, 00:09:38.282 "flush": true, 00:09:38.282 "reset": true, 00:09:38.282 "nvme_admin": false, 00:09:38.282 "nvme_io": false, 00:09:38.282 "nvme_io_md": false, 00:09:38.282 "write_zeroes": true, 00:09:38.282 "zcopy": true, 00:09:38.282 "get_zone_info": false, 00:09:38.282 "zone_management": false, 00:09:38.282 "zone_append": false, 00:09:38.282 "compare": false, 00:09:38.282 "compare_and_write": false, 00:09:38.282 "abort": true, 00:09:38.282 "seek_hole": false, 00:09:38.282 "seek_data": false, 00:09:38.282 "copy": true, 00:09:38.282 "nvme_iov_md": false 00:09:38.282 }, 00:09:38.282 "memory_domains": [ 00:09:38.282 { 00:09:38.282 "dma_device_id": "system", 00:09:38.282 "dma_device_type": 1 00:09:38.282 }, 00:09:38.282 { 00:09:38.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.282 "dma_device_type": 2 00:09:38.282 } 00:09:38.282 ], 00:09:38.282 "driver_specific": {} 00:09:38.282 } 00:09:38.282 ] 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.282 "name": "Existed_Raid", 
00:09:38.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.282 "strip_size_kb": 64, 00:09:38.282 "state": "configuring", 00:09:38.282 "raid_level": "raid0", 00:09:38.282 "superblock": false, 00:09:38.282 "num_base_bdevs": 4, 00:09:38.282 "num_base_bdevs_discovered": 1, 00:09:38.282 "num_base_bdevs_operational": 4, 00:09:38.282 "base_bdevs_list": [ 00:09:38.282 { 00:09:38.282 "name": "BaseBdev1", 00:09:38.282 "uuid": "e365fed6-e3ae-46fc-b23b-fc9e9cdb02a5", 00:09:38.282 "is_configured": true, 00:09:38.282 "data_offset": 0, 00:09:38.282 "data_size": 65536 00:09:38.282 }, 00:09:38.282 { 00:09:38.282 "name": "BaseBdev2", 00:09:38.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.282 "is_configured": false, 00:09:38.282 "data_offset": 0, 00:09:38.282 "data_size": 0 00:09:38.282 }, 00:09:38.282 { 00:09:38.282 "name": "BaseBdev3", 00:09:38.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.282 "is_configured": false, 00:09:38.282 "data_offset": 0, 00:09:38.282 "data_size": 0 00:09:38.282 }, 00:09:38.282 { 00:09:38.282 "name": "BaseBdev4", 00:09:38.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.282 "is_configured": false, 00:09:38.282 "data_offset": 0, 00:09:38.282 "data_size": 0 00:09:38.282 } 00:09:38.282 ] 00:09:38.282 }' 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.282 13:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.542 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.542 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.542 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.542 [2024-11-20 13:23:20.148934] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.542 [2024-11-20 13:23:20.149082] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:38.542 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.542 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:38.542 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.542 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.542 [2024-11-20 13:23:20.160951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.542 [2024-11-20 13:23:20.162858] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.542 [2024-11-20 13:23:20.162960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.542 [2024-11-20 13:23:20.163011] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:38.542 [2024-11-20 13:23:20.163042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:38.542 [2024-11-20 13:23:20.163065] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:38.542 [2024-11-20 13:23:20.163091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:38.542 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.542 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:38.542 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.542 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:38.542 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.542 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.543 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.543 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.543 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.543 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.543 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.543 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.543 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.543 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.543 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.543 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.543 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.543 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.543 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.543 "name": "Existed_Raid", 00:09:38.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.543 "strip_size_kb": 64, 00:09:38.543 "state": "configuring", 00:09:38.543 "raid_level": "raid0", 00:09:38.543 "superblock": false, 00:09:38.543 "num_base_bdevs": 4, 00:09:38.543 
"num_base_bdevs_discovered": 1, 00:09:38.543 "num_base_bdevs_operational": 4, 00:09:38.543 "base_bdevs_list": [ 00:09:38.543 { 00:09:38.543 "name": "BaseBdev1", 00:09:38.543 "uuid": "e365fed6-e3ae-46fc-b23b-fc9e9cdb02a5", 00:09:38.543 "is_configured": true, 00:09:38.543 "data_offset": 0, 00:09:38.543 "data_size": 65536 00:09:38.543 }, 00:09:38.543 { 00:09:38.543 "name": "BaseBdev2", 00:09:38.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.543 "is_configured": false, 00:09:38.543 "data_offset": 0, 00:09:38.543 "data_size": 0 00:09:38.543 }, 00:09:38.543 { 00:09:38.543 "name": "BaseBdev3", 00:09:38.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.543 "is_configured": false, 00:09:38.543 "data_offset": 0, 00:09:38.543 "data_size": 0 00:09:38.543 }, 00:09:38.543 { 00:09:38.543 "name": "BaseBdev4", 00:09:38.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.543 "is_configured": false, 00:09:38.543 "data_offset": 0, 00:09:38.543 "data_size": 0 00:09:38.543 } 00:09:38.543 ] 00:09:38.543 }' 00:09:38.543 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.543 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.111 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:39.111 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.111 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.111 [2024-11-20 13:23:20.595445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.111 BaseBdev2 00:09:39.111 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.111 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:39.111 13:23:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.112 [ 00:09:39.112 { 00:09:39.112 "name": "BaseBdev2", 00:09:39.112 "aliases": [ 00:09:39.112 "65757093-cb1b-4e2d-b63f-e8d3be92c079" 00:09:39.112 ], 00:09:39.112 "product_name": "Malloc disk", 00:09:39.112 "block_size": 512, 00:09:39.112 "num_blocks": 65536, 00:09:39.112 "uuid": "65757093-cb1b-4e2d-b63f-e8d3be92c079", 00:09:39.112 "assigned_rate_limits": { 00:09:39.112 "rw_ios_per_sec": 0, 00:09:39.112 "rw_mbytes_per_sec": 0, 00:09:39.112 "r_mbytes_per_sec": 0, 00:09:39.112 "w_mbytes_per_sec": 0 00:09:39.112 }, 00:09:39.112 "claimed": true, 00:09:39.112 "claim_type": "exclusive_write", 00:09:39.112 "zoned": false, 00:09:39.112 "supported_io_types": { 
00:09:39.112 "read": true, 00:09:39.112 "write": true, 00:09:39.112 "unmap": true, 00:09:39.112 "flush": true, 00:09:39.112 "reset": true, 00:09:39.112 "nvme_admin": false, 00:09:39.112 "nvme_io": false, 00:09:39.112 "nvme_io_md": false, 00:09:39.112 "write_zeroes": true, 00:09:39.112 "zcopy": true, 00:09:39.112 "get_zone_info": false, 00:09:39.112 "zone_management": false, 00:09:39.112 "zone_append": false, 00:09:39.112 "compare": false, 00:09:39.112 "compare_and_write": false, 00:09:39.112 "abort": true, 00:09:39.112 "seek_hole": false, 00:09:39.112 "seek_data": false, 00:09:39.112 "copy": true, 00:09:39.112 "nvme_iov_md": false 00:09:39.112 }, 00:09:39.112 "memory_domains": [ 00:09:39.112 { 00:09:39.112 "dma_device_id": "system", 00:09:39.112 "dma_device_type": 1 00:09:39.112 }, 00:09:39.112 { 00:09:39.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.112 "dma_device_type": 2 00:09:39.112 } 00:09:39.112 ], 00:09:39.112 "driver_specific": {} 00:09:39.112 } 00:09:39.112 ] 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.112 "name": "Existed_Raid", 00:09:39.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.112 "strip_size_kb": 64, 00:09:39.112 "state": "configuring", 00:09:39.112 "raid_level": "raid0", 00:09:39.112 "superblock": false, 00:09:39.112 "num_base_bdevs": 4, 00:09:39.112 "num_base_bdevs_discovered": 2, 00:09:39.112 "num_base_bdevs_operational": 4, 00:09:39.112 "base_bdevs_list": [ 00:09:39.112 { 00:09:39.112 "name": "BaseBdev1", 00:09:39.112 "uuid": "e365fed6-e3ae-46fc-b23b-fc9e9cdb02a5", 00:09:39.112 "is_configured": true, 00:09:39.112 "data_offset": 0, 00:09:39.112 "data_size": 65536 00:09:39.112 }, 00:09:39.112 { 00:09:39.112 "name": "BaseBdev2", 00:09:39.112 "uuid": "65757093-cb1b-4e2d-b63f-e8d3be92c079", 00:09:39.112 
"is_configured": true, 00:09:39.112 "data_offset": 0, 00:09:39.112 "data_size": 65536 00:09:39.112 }, 00:09:39.112 { 00:09:39.112 "name": "BaseBdev3", 00:09:39.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.112 "is_configured": false, 00:09:39.112 "data_offset": 0, 00:09:39.112 "data_size": 0 00:09:39.112 }, 00:09:39.112 { 00:09:39.112 "name": "BaseBdev4", 00:09:39.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.112 "is_configured": false, 00:09:39.112 "data_offset": 0, 00:09:39.112 "data_size": 0 00:09:39.112 } 00:09:39.112 ] 00:09:39.112 }' 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.112 13:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.681 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:39.681 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.681 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.681 [2024-11-20 13:23:21.148624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.681 BaseBdev3 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.682 [ 00:09:39.682 { 00:09:39.682 "name": "BaseBdev3", 00:09:39.682 "aliases": [ 00:09:39.682 "b0b01870-dd04-47c9-8d03-bdb493f662ef" 00:09:39.682 ], 00:09:39.682 "product_name": "Malloc disk", 00:09:39.682 "block_size": 512, 00:09:39.682 "num_blocks": 65536, 00:09:39.682 "uuid": "b0b01870-dd04-47c9-8d03-bdb493f662ef", 00:09:39.682 "assigned_rate_limits": { 00:09:39.682 "rw_ios_per_sec": 0, 00:09:39.682 "rw_mbytes_per_sec": 0, 00:09:39.682 "r_mbytes_per_sec": 0, 00:09:39.682 "w_mbytes_per_sec": 0 00:09:39.682 }, 00:09:39.682 "claimed": true, 00:09:39.682 "claim_type": "exclusive_write", 00:09:39.682 "zoned": false, 00:09:39.682 "supported_io_types": { 00:09:39.682 "read": true, 00:09:39.682 "write": true, 00:09:39.682 "unmap": true, 00:09:39.682 "flush": true, 00:09:39.682 "reset": true, 00:09:39.682 "nvme_admin": false, 00:09:39.682 "nvme_io": false, 00:09:39.682 "nvme_io_md": false, 00:09:39.682 "write_zeroes": true, 00:09:39.682 "zcopy": true, 00:09:39.682 "get_zone_info": false, 00:09:39.682 "zone_management": false, 00:09:39.682 "zone_append": false, 00:09:39.682 "compare": false, 00:09:39.682 "compare_and_write": false, 
00:09:39.682 "abort": true, 00:09:39.682 "seek_hole": false, 00:09:39.682 "seek_data": false, 00:09:39.682 "copy": true, 00:09:39.682 "nvme_iov_md": false 00:09:39.682 }, 00:09:39.682 "memory_domains": [ 00:09:39.682 { 00:09:39.682 "dma_device_id": "system", 00:09:39.682 "dma_device_type": 1 00:09:39.682 }, 00:09:39.682 { 00:09:39.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.682 "dma_device_type": 2 00:09:39.682 } 00:09:39.682 ], 00:09:39.682 "driver_specific": {} 00:09:39.682 } 00:09:39.682 ] 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.682 "name": "Existed_Raid", 00:09:39.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.682 "strip_size_kb": 64, 00:09:39.682 "state": "configuring", 00:09:39.682 "raid_level": "raid0", 00:09:39.682 "superblock": false, 00:09:39.682 "num_base_bdevs": 4, 00:09:39.682 "num_base_bdevs_discovered": 3, 00:09:39.682 "num_base_bdevs_operational": 4, 00:09:39.682 "base_bdevs_list": [ 00:09:39.682 { 00:09:39.682 "name": "BaseBdev1", 00:09:39.682 "uuid": "e365fed6-e3ae-46fc-b23b-fc9e9cdb02a5", 00:09:39.682 "is_configured": true, 00:09:39.682 "data_offset": 0, 00:09:39.682 "data_size": 65536 00:09:39.682 }, 00:09:39.682 { 00:09:39.682 "name": "BaseBdev2", 00:09:39.682 "uuid": "65757093-cb1b-4e2d-b63f-e8d3be92c079", 00:09:39.682 "is_configured": true, 00:09:39.682 "data_offset": 0, 00:09:39.682 "data_size": 65536 00:09:39.682 }, 00:09:39.682 { 00:09:39.682 "name": "BaseBdev3", 00:09:39.682 "uuid": "b0b01870-dd04-47c9-8d03-bdb493f662ef", 00:09:39.682 "is_configured": true, 00:09:39.682 "data_offset": 0, 00:09:39.682 "data_size": 65536 00:09:39.682 }, 00:09:39.682 { 00:09:39.682 "name": "BaseBdev4", 00:09:39.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.682 "is_configured": false, 
00:09:39.682 "data_offset": 0, 00:09:39.682 "data_size": 0 00:09:39.682 } 00:09:39.682 ] 00:09:39.682 }' 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.682 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.944 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:39.944 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.944 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.203 [2024-11-20 13:23:21.611337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:40.203 [2024-11-20 13:23:21.611499] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:40.203 [2024-11-20 13:23:21.611532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:40.203 [2024-11-20 13:23:21.611921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:40.203 [2024-11-20 13:23:21.612159] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:40.203 [2024-11-20 13:23:21.612220] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:40.203 [2024-11-20 13:23:21.612526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.203 BaseBdev4 00:09:40.203 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.203 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:40.203 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:40.203 13:23:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.203 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:40.203 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.203 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.203 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.203 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.203 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.203 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.203 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:40.203 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.203 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.203 [ 00:09:40.203 { 00:09:40.203 "name": "BaseBdev4", 00:09:40.203 "aliases": [ 00:09:40.204 "bd4bb5e5-e1a3-408a-a357-189dcd762978" 00:09:40.204 ], 00:09:40.204 "product_name": "Malloc disk", 00:09:40.204 "block_size": 512, 00:09:40.204 "num_blocks": 65536, 00:09:40.204 "uuid": "bd4bb5e5-e1a3-408a-a357-189dcd762978", 00:09:40.204 "assigned_rate_limits": { 00:09:40.204 "rw_ios_per_sec": 0, 00:09:40.204 "rw_mbytes_per_sec": 0, 00:09:40.204 "r_mbytes_per_sec": 0, 00:09:40.204 "w_mbytes_per_sec": 0 00:09:40.204 }, 00:09:40.204 "claimed": true, 00:09:40.204 "claim_type": "exclusive_write", 00:09:40.204 "zoned": false, 00:09:40.204 "supported_io_types": { 00:09:40.204 "read": true, 00:09:40.204 "write": true, 00:09:40.204 "unmap": true, 00:09:40.204 "flush": true, 00:09:40.204 "reset": true, 00:09:40.204 
"nvme_admin": false, 00:09:40.204 "nvme_io": false, 00:09:40.204 "nvme_io_md": false, 00:09:40.204 "write_zeroes": true, 00:09:40.204 "zcopy": true, 00:09:40.204 "get_zone_info": false, 00:09:40.204 "zone_management": false, 00:09:40.204 "zone_append": false, 00:09:40.204 "compare": false, 00:09:40.204 "compare_and_write": false, 00:09:40.204 "abort": true, 00:09:40.204 "seek_hole": false, 00:09:40.204 "seek_data": false, 00:09:40.204 "copy": true, 00:09:40.204 "nvme_iov_md": false 00:09:40.204 }, 00:09:40.204 "memory_domains": [ 00:09:40.204 { 00:09:40.204 "dma_device_id": "system", 00:09:40.204 "dma_device_type": 1 00:09:40.204 }, 00:09:40.204 { 00:09:40.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.204 "dma_device_type": 2 00:09:40.204 } 00:09:40.204 ], 00:09:40.204 "driver_specific": {} 00:09:40.204 } 00:09:40.204 ] 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.204 13:23:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.204 "name": "Existed_Raid", 00:09:40.204 "uuid": "a0d0a55e-b10a-4f9f-bd58-8ddc0668dba2", 00:09:40.204 "strip_size_kb": 64, 00:09:40.204 "state": "online", 00:09:40.204 "raid_level": "raid0", 00:09:40.204 "superblock": false, 00:09:40.204 "num_base_bdevs": 4, 00:09:40.204 "num_base_bdevs_discovered": 4, 00:09:40.204 "num_base_bdevs_operational": 4, 00:09:40.204 "base_bdevs_list": [ 00:09:40.204 { 00:09:40.204 "name": "BaseBdev1", 00:09:40.204 "uuid": "e365fed6-e3ae-46fc-b23b-fc9e9cdb02a5", 00:09:40.204 "is_configured": true, 00:09:40.204 "data_offset": 0, 00:09:40.204 "data_size": 65536 00:09:40.204 }, 00:09:40.204 { 00:09:40.204 "name": "BaseBdev2", 00:09:40.204 "uuid": "65757093-cb1b-4e2d-b63f-e8d3be92c079", 00:09:40.204 "is_configured": true, 00:09:40.204 "data_offset": 0, 00:09:40.204 "data_size": 65536 00:09:40.204 }, 00:09:40.204 { 00:09:40.204 "name": "BaseBdev3", 00:09:40.204 "uuid": 
"b0b01870-dd04-47c9-8d03-bdb493f662ef", 00:09:40.204 "is_configured": true, 00:09:40.204 "data_offset": 0, 00:09:40.204 "data_size": 65536 00:09:40.204 }, 00:09:40.204 { 00:09:40.204 "name": "BaseBdev4", 00:09:40.204 "uuid": "bd4bb5e5-e1a3-408a-a357-189dcd762978", 00:09:40.204 "is_configured": true, 00:09:40.204 "data_offset": 0, 00:09:40.204 "data_size": 65536 00:09:40.204 } 00:09:40.204 ] 00:09:40.204 }' 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.204 13:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.464 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:40.464 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:40.464 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.464 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.464 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.464 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.464 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.464 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:40.464 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.464 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.464 [2024-11-20 13:23:22.098940] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.464 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.464 13:23:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.464 "name": "Existed_Raid", 00:09:40.464 "aliases": [ 00:09:40.464 "a0d0a55e-b10a-4f9f-bd58-8ddc0668dba2" 00:09:40.464 ], 00:09:40.464 "product_name": "Raid Volume", 00:09:40.464 "block_size": 512, 00:09:40.464 "num_blocks": 262144, 00:09:40.464 "uuid": "a0d0a55e-b10a-4f9f-bd58-8ddc0668dba2", 00:09:40.464 "assigned_rate_limits": { 00:09:40.464 "rw_ios_per_sec": 0, 00:09:40.464 "rw_mbytes_per_sec": 0, 00:09:40.464 "r_mbytes_per_sec": 0, 00:09:40.464 "w_mbytes_per_sec": 0 00:09:40.464 }, 00:09:40.464 "claimed": false, 00:09:40.464 "zoned": false, 00:09:40.464 "supported_io_types": { 00:09:40.464 "read": true, 00:09:40.464 "write": true, 00:09:40.464 "unmap": true, 00:09:40.464 "flush": true, 00:09:40.464 "reset": true, 00:09:40.464 "nvme_admin": false, 00:09:40.464 "nvme_io": false, 00:09:40.464 "nvme_io_md": false, 00:09:40.464 "write_zeroes": true, 00:09:40.464 "zcopy": false, 00:09:40.464 "get_zone_info": false, 00:09:40.464 "zone_management": false, 00:09:40.464 "zone_append": false, 00:09:40.464 "compare": false, 00:09:40.464 "compare_and_write": false, 00:09:40.464 "abort": false, 00:09:40.464 "seek_hole": false, 00:09:40.464 "seek_data": false, 00:09:40.464 "copy": false, 00:09:40.464 "nvme_iov_md": false 00:09:40.464 }, 00:09:40.464 "memory_domains": [ 00:09:40.464 { 00:09:40.464 "dma_device_id": "system", 00:09:40.464 "dma_device_type": 1 00:09:40.464 }, 00:09:40.464 { 00:09:40.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.464 "dma_device_type": 2 00:09:40.464 }, 00:09:40.464 { 00:09:40.464 "dma_device_id": "system", 00:09:40.464 "dma_device_type": 1 00:09:40.464 }, 00:09:40.464 { 00:09:40.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.464 "dma_device_type": 2 00:09:40.464 }, 00:09:40.464 { 00:09:40.464 "dma_device_id": "system", 00:09:40.464 "dma_device_type": 1 00:09:40.464 }, 00:09:40.464 { 00:09:40.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:40.464 "dma_device_type": 2 00:09:40.464 }, 00:09:40.464 { 00:09:40.464 "dma_device_id": "system", 00:09:40.464 "dma_device_type": 1 00:09:40.464 }, 00:09:40.464 { 00:09:40.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.464 "dma_device_type": 2 00:09:40.464 } 00:09:40.464 ], 00:09:40.464 "driver_specific": { 00:09:40.464 "raid": { 00:09:40.464 "uuid": "a0d0a55e-b10a-4f9f-bd58-8ddc0668dba2", 00:09:40.464 "strip_size_kb": 64, 00:09:40.464 "state": "online", 00:09:40.464 "raid_level": "raid0", 00:09:40.464 "superblock": false, 00:09:40.464 "num_base_bdevs": 4, 00:09:40.464 "num_base_bdevs_discovered": 4, 00:09:40.464 "num_base_bdevs_operational": 4, 00:09:40.464 "base_bdevs_list": [ 00:09:40.464 { 00:09:40.464 "name": "BaseBdev1", 00:09:40.464 "uuid": "e365fed6-e3ae-46fc-b23b-fc9e9cdb02a5", 00:09:40.464 "is_configured": true, 00:09:40.464 "data_offset": 0, 00:09:40.464 "data_size": 65536 00:09:40.464 }, 00:09:40.464 { 00:09:40.464 "name": "BaseBdev2", 00:09:40.464 "uuid": "65757093-cb1b-4e2d-b63f-e8d3be92c079", 00:09:40.464 "is_configured": true, 00:09:40.464 "data_offset": 0, 00:09:40.464 "data_size": 65536 00:09:40.464 }, 00:09:40.464 { 00:09:40.464 "name": "BaseBdev3", 00:09:40.464 "uuid": "b0b01870-dd04-47c9-8d03-bdb493f662ef", 00:09:40.464 "is_configured": true, 00:09:40.464 "data_offset": 0, 00:09:40.464 "data_size": 65536 00:09:40.464 }, 00:09:40.464 { 00:09:40.464 "name": "BaseBdev4", 00:09:40.464 "uuid": "bd4bb5e5-e1a3-408a-a357-189dcd762978", 00:09:40.464 "is_configured": true, 00:09:40.464 "data_offset": 0, 00:09:40.464 "data_size": 65536 00:09:40.464 } 00:09:40.464 ] 00:09:40.464 } 00:09:40.464 } 00:09:40.464 }' 00:09:40.464 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.723 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:40.723 BaseBdev2 00:09:40.723 BaseBdev3 
00:09:40.723 BaseBdev4' 00:09:40.723 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.723 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.724 13:23:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.724 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.983 13:23:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.983 [2024-11-20 13:23:22.414164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:40.983 [2024-11-20 13:23:22.414201] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.983 [2024-11-20 13:23:22.414271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.983 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.983 "name": "Existed_Raid", 00:09:40.983 "uuid": "a0d0a55e-b10a-4f9f-bd58-8ddc0668dba2", 00:09:40.983 "strip_size_kb": 64, 00:09:40.983 "state": "offline", 00:09:40.983 "raid_level": "raid0", 00:09:40.983 "superblock": false, 00:09:40.983 "num_base_bdevs": 4, 00:09:40.983 "num_base_bdevs_discovered": 3, 00:09:40.983 "num_base_bdevs_operational": 3, 00:09:40.983 "base_bdevs_list": [ 00:09:40.983 { 00:09:40.983 "name": null, 00:09:40.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.983 "is_configured": false, 00:09:40.983 "data_offset": 0, 00:09:40.983 "data_size": 65536 00:09:40.983 }, 00:09:40.983 { 00:09:40.983 "name": "BaseBdev2", 00:09:40.983 "uuid": "65757093-cb1b-4e2d-b63f-e8d3be92c079", 00:09:40.983 "is_configured": 
true, 00:09:40.983 "data_offset": 0, 00:09:40.983 "data_size": 65536 00:09:40.983 }, 00:09:40.983 { 00:09:40.983 "name": "BaseBdev3", 00:09:40.983 "uuid": "b0b01870-dd04-47c9-8d03-bdb493f662ef", 00:09:40.983 "is_configured": true, 00:09:40.983 "data_offset": 0, 00:09:40.983 "data_size": 65536 00:09:40.983 }, 00:09:40.983 { 00:09:40.983 "name": "BaseBdev4", 00:09:40.984 "uuid": "bd4bb5e5-e1a3-408a-a357-189dcd762978", 00:09:40.984 "is_configured": true, 00:09:40.984 "data_offset": 0, 00:09:40.984 "data_size": 65536 00:09:40.984 } 00:09:40.984 ] 00:09:40.984 }' 00:09:40.984 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.984 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.243 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:41.243 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.243 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:41.243 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.243 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.243 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.243 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.503 [2024-11-20 13:23:22.917161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.503 [2024-11-20 13:23:22.984503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.503 13:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:41.503 13:23:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.503 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.503 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:41.503 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.503 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.503 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.503 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:41.503 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:41.503 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:41.503 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.503 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.504 [2024-11-20 13:23:23.055865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:41.504 [2024-11-20 13:23:23.055973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.504 BaseBdev2 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.504 [ 00:09:41.504 { 00:09:41.504 "name": "BaseBdev2", 00:09:41.504 "aliases": [ 00:09:41.504 "59d0a2af-925f-417e-9c39-2ee935a4c349" 00:09:41.504 ], 00:09:41.504 "product_name": "Malloc disk", 00:09:41.504 "block_size": 512, 00:09:41.504 "num_blocks": 65536, 00:09:41.504 "uuid": "59d0a2af-925f-417e-9c39-2ee935a4c349", 00:09:41.504 "assigned_rate_limits": { 00:09:41.504 "rw_ios_per_sec": 0, 00:09:41.504 "rw_mbytes_per_sec": 0, 00:09:41.504 "r_mbytes_per_sec": 0, 00:09:41.504 "w_mbytes_per_sec": 0 00:09:41.504 }, 00:09:41.504 "claimed": false, 00:09:41.504 "zoned": false, 00:09:41.504 "supported_io_types": { 00:09:41.504 "read": true, 00:09:41.504 "write": true, 00:09:41.504 "unmap": true, 00:09:41.504 "flush": true, 00:09:41.504 "reset": true, 00:09:41.504 "nvme_admin": false, 00:09:41.504 "nvme_io": false, 00:09:41.504 "nvme_io_md": false, 00:09:41.504 "write_zeroes": true, 00:09:41.504 "zcopy": true, 00:09:41.504 "get_zone_info": false, 00:09:41.504 "zone_management": false, 00:09:41.504 "zone_append": false, 00:09:41.504 "compare": false, 00:09:41.504 "compare_and_write": false, 00:09:41.504 "abort": true, 00:09:41.504 "seek_hole": false, 00:09:41.504 "seek_data": false, 
00:09:41.504 "copy": true, 00:09:41.504 "nvme_iov_md": false 00:09:41.504 }, 00:09:41.504 "memory_domains": [ 00:09:41.504 { 00:09:41.504 "dma_device_id": "system", 00:09:41.504 "dma_device_type": 1 00:09:41.504 }, 00:09:41.504 { 00:09:41.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.504 "dma_device_type": 2 00:09:41.504 } 00:09:41.504 ], 00:09:41.504 "driver_specific": {} 00:09:41.504 } 00:09:41.504 ] 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.504 BaseBdev3 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.504 
13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.504 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.765 [ 00:09:41.765 { 00:09:41.765 "name": "BaseBdev3", 00:09:41.765 "aliases": [ 00:09:41.765 "ac99002f-e889-492e-865b-e979a32cbc2b" 00:09:41.765 ], 00:09:41.765 "product_name": "Malloc disk", 00:09:41.765 "block_size": 512, 00:09:41.765 "num_blocks": 65536, 00:09:41.765 "uuid": "ac99002f-e889-492e-865b-e979a32cbc2b", 00:09:41.765 "assigned_rate_limits": { 00:09:41.765 "rw_ios_per_sec": 0, 00:09:41.765 "rw_mbytes_per_sec": 0, 00:09:41.765 "r_mbytes_per_sec": 0, 00:09:41.765 "w_mbytes_per_sec": 0 00:09:41.765 }, 00:09:41.765 "claimed": false, 00:09:41.765 "zoned": false, 00:09:41.765 "supported_io_types": { 00:09:41.765 "read": true, 00:09:41.765 "write": true, 00:09:41.765 "unmap": true, 00:09:41.765 "flush": true, 00:09:41.765 "reset": true, 00:09:41.765 "nvme_admin": false, 00:09:41.765 "nvme_io": false, 00:09:41.765 "nvme_io_md": false, 00:09:41.765 "write_zeroes": true, 00:09:41.765 "zcopy": true, 00:09:41.765 "get_zone_info": false, 00:09:41.765 "zone_management": false, 00:09:41.765 "zone_append": false, 00:09:41.765 "compare": false, 00:09:41.765 "compare_and_write": false, 00:09:41.765 "abort": true, 00:09:41.765 "seek_hole": false, 00:09:41.765 "seek_data": false, 00:09:41.765 
"copy": true, 00:09:41.765 "nvme_iov_md": false 00:09:41.765 }, 00:09:41.765 "memory_domains": [ 00:09:41.765 { 00:09:41.765 "dma_device_id": "system", 00:09:41.765 "dma_device_type": 1 00:09:41.765 }, 00:09:41.765 { 00:09:41.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.765 "dma_device_type": 2 00:09:41.765 } 00:09:41.765 ], 00:09:41.765 "driver_specific": {} 00:09:41.765 } 00:09:41.765 ] 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.765 BaseBdev4 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.765 13:23:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.765 [ 00:09:41.765 { 00:09:41.765 "name": "BaseBdev4", 00:09:41.765 "aliases": [ 00:09:41.765 "0c17eecf-549a-49b9-8d2e-23ff231d20dd" 00:09:41.765 ], 00:09:41.765 "product_name": "Malloc disk", 00:09:41.765 "block_size": 512, 00:09:41.765 "num_blocks": 65536, 00:09:41.765 "uuid": "0c17eecf-549a-49b9-8d2e-23ff231d20dd", 00:09:41.765 "assigned_rate_limits": { 00:09:41.765 "rw_ios_per_sec": 0, 00:09:41.765 "rw_mbytes_per_sec": 0, 00:09:41.765 "r_mbytes_per_sec": 0, 00:09:41.765 "w_mbytes_per_sec": 0 00:09:41.765 }, 00:09:41.765 "claimed": false, 00:09:41.765 "zoned": false, 00:09:41.765 "supported_io_types": { 00:09:41.765 "read": true, 00:09:41.765 "write": true, 00:09:41.765 "unmap": true, 00:09:41.765 "flush": true, 00:09:41.765 "reset": true, 00:09:41.765 "nvme_admin": false, 00:09:41.765 "nvme_io": false, 00:09:41.765 "nvme_io_md": false, 00:09:41.765 "write_zeroes": true, 00:09:41.765 "zcopy": true, 00:09:41.765 "get_zone_info": false, 00:09:41.765 "zone_management": false, 00:09:41.765 "zone_append": false, 00:09:41.765 "compare": false, 00:09:41.765 "compare_and_write": false, 00:09:41.765 "abort": true, 00:09:41.765 "seek_hole": false, 00:09:41.765 "seek_data": false, 00:09:41.765 "copy": true, 
00:09:41.765 "nvme_iov_md": false 00:09:41.765 }, 00:09:41.765 "memory_domains": [ 00:09:41.765 { 00:09:41.765 "dma_device_id": "system", 00:09:41.765 "dma_device_type": 1 00:09:41.765 }, 00:09:41.765 { 00:09:41.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.765 "dma_device_type": 2 00:09:41.765 } 00:09:41.765 ], 00:09:41.765 "driver_specific": {} 00:09:41.765 } 00:09:41.765 ] 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.765 [2024-11-20 13:23:23.241509] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.765 [2024-11-20 13:23:23.241606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.765 [2024-11-20 13:23:23.241679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.765 [2024-11-20 13:23:23.243643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.765 [2024-11-20 13:23:23.243760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.765 13:23:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.765 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.766 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.766 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.766 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.766 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.766 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.766 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.766 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.766 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.766 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.766 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.766 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.766 "name": "Existed_Raid", 00:09:41.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.766 "strip_size_kb": 64, 00:09:41.766 "state": "configuring", 00:09:41.766 
"raid_level": "raid0", 00:09:41.766 "superblock": false, 00:09:41.766 "num_base_bdevs": 4, 00:09:41.766 "num_base_bdevs_discovered": 3, 00:09:41.766 "num_base_bdevs_operational": 4, 00:09:41.766 "base_bdevs_list": [ 00:09:41.766 { 00:09:41.766 "name": "BaseBdev1", 00:09:41.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.766 "is_configured": false, 00:09:41.766 "data_offset": 0, 00:09:41.766 "data_size": 0 00:09:41.766 }, 00:09:41.766 { 00:09:41.766 "name": "BaseBdev2", 00:09:41.766 "uuid": "59d0a2af-925f-417e-9c39-2ee935a4c349", 00:09:41.766 "is_configured": true, 00:09:41.766 "data_offset": 0, 00:09:41.766 "data_size": 65536 00:09:41.766 }, 00:09:41.766 { 00:09:41.766 "name": "BaseBdev3", 00:09:41.766 "uuid": "ac99002f-e889-492e-865b-e979a32cbc2b", 00:09:41.766 "is_configured": true, 00:09:41.766 "data_offset": 0, 00:09:41.766 "data_size": 65536 00:09:41.766 }, 00:09:41.766 { 00:09:41.766 "name": "BaseBdev4", 00:09:41.766 "uuid": "0c17eecf-549a-49b9-8d2e-23ff231d20dd", 00:09:41.766 "is_configured": true, 00:09:41.766 "data_offset": 0, 00:09:41.766 "data_size": 65536 00:09:41.766 } 00:09:41.766 ] 00:09:41.766 }' 00:09:41.766 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.766 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.026 [2024-11-20 13:23:23.616909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.026 "name": "Existed_Raid", 00:09:42.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.026 "strip_size_kb": 64, 00:09:42.026 "state": "configuring", 00:09:42.026 "raid_level": "raid0", 00:09:42.026 "superblock": false, 00:09:42.026 
"num_base_bdevs": 4, 00:09:42.026 "num_base_bdevs_discovered": 2, 00:09:42.026 "num_base_bdevs_operational": 4, 00:09:42.026 "base_bdevs_list": [ 00:09:42.026 { 00:09:42.026 "name": "BaseBdev1", 00:09:42.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.026 "is_configured": false, 00:09:42.026 "data_offset": 0, 00:09:42.026 "data_size": 0 00:09:42.026 }, 00:09:42.026 { 00:09:42.026 "name": null, 00:09:42.026 "uuid": "59d0a2af-925f-417e-9c39-2ee935a4c349", 00:09:42.026 "is_configured": false, 00:09:42.026 "data_offset": 0, 00:09:42.026 "data_size": 65536 00:09:42.026 }, 00:09:42.026 { 00:09:42.026 "name": "BaseBdev3", 00:09:42.026 "uuid": "ac99002f-e889-492e-865b-e979a32cbc2b", 00:09:42.026 "is_configured": true, 00:09:42.026 "data_offset": 0, 00:09:42.026 "data_size": 65536 00:09:42.026 }, 00:09:42.026 { 00:09:42.026 "name": "BaseBdev4", 00:09:42.026 "uuid": "0c17eecf-549a-49b9-8d2e-23ff231d20dd", 00:09:42.026 "is_configured": true, 00:09:42.026 "data_offset": 0, 00:09:42.026 "data_size": 65536 00:09:42.026 } 00:09:42.026 ] 00:09:42.026 }' 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.026 13:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:42.595 13:23:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.595 BaseBdev1 00:09:42.595 [2024-11-20 13:23:24.083356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.595 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:42.596 [ 00:09:42.596 { 00:09:42.596 "name": "BaseBdev1", 00:09:42.596 "aliases": [ 00:09:42.596 "9b509cdf-fffc-4b65-842e-da8916244485" 00:09:42.596 ], 00:09:42.596 "product_name": "Malloc disk", 00:09:42.596 "block_size": 512, 00:09:42.596 "num_blocks": 65536, 00:09:42.596 "uuid": "9b509cdf-fffc-4b65-842e-da8916244485", 00:09:42.596 "assigned_rate_limits": { 00:09:42.596 "rw_ios_per_sec": 0, 00:09:42.596 "rw_mbytes_per_sec": 0, 00:09:42.596 "r_mbytes_per_sec": 0, 00:09:42.596 "w_mbytes_per_sec": 0 00:09:42.596 }, 00:09:42.596 "claimed": true, 00:09:42.596 "claim_type": "exclusive_write", 00:09:42.596 "zoned": false, 00:09:42.596 "supported_io_types": { 00:09:42.596 "read": true, 00:09:42.596 "write": true, 00:09:42.596 "unmap": true, 00:09:42.596 "flush": true, 00:09:42.596 "reset": true, 00:09:42.596 "nvme_admin": false, 00:09:42.596 "nvme_io": false, 00:09:42.596 "nvme_io_md": false, 00:09:42.596 "write_zeroes": true, 00:09:42.596 "zcopy": true, 00:09:42.596 "get_zone_info": false, 00:09:42.596 "zone_management": false, 00:09:42.596 "zone_append": false, 00:09:42.596 "compare": false, 00:09:42.596 "compare_and_write": false, 00:09:42.596 "abort": true, 00:09:42.596 "seek_hole": false, 00:09:42.596 "seek_data": false, 00:09:42.596 "copy": true, 00:09:42.596 "nvme_iov_md": false 00:09:42.596 }, 00:09:42.596 "memory_domains": [ 00:09:42.596 { 00:09:42.596 "dma_device_id": "system", 00:09:42.596 "dma_device_type": 1 00:09:42.596 }, 00:09:42.596 { 00:09:42.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.596 "dma_device_type": 2 00:09:42.596 } 00:09:42.596 ], 00:09:42.596 "driver_specific": {} 00:09:42.596 } 00:09:42.596 ] 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.596 "name": "Existed_Raid", 00:09:42.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.596 "strip_size_kb": 64, 00:09:42.596 "state": "configuring", 00:09:42.596 "raid_level": "raid0", 00:09:42.596 "superblock": false, 
00:09:42.596 "num_base_bdevs": 4, 00:09:42.596 "num_base_bdevs_discovered": 3, 00:09:42.596 "num_base_bdevs_operational": 4, 00:09:42.596 "base_bdevs_list": [ 00:09:42.596 { 00:09:42.596 "name": "BaseBdev1", 00:09:42.596 "uuid": "9b509cdf-fffc-4b65-842e-da8916244485", 00:09:42.596 "is_configured": true, 00:09:42.596 "data_offset": 0, 00:09:42.596 "data_size": 65536 00:09:42.596 }, 00:09:42.596 { 00:09:42.596 "name": null, 00:09:42.596 "uuid": "59d0a2af-925f-417e-9c39-2ee935a4c349", 00:09:42.596 "is_configured": false, 00:09:42.596 "data_offset": 0, 00:09:42.596 "data_size": 65536 00:09:42.596 }, 00:09:42.596 { 00:09:42.596 "name": "BaseBdev3", 00:09:42.596 "uuid": "ac99002f-e889-492e-865b-e979a32cbc2b", 00:09:42.596 "is_configured": true, 00:09:42.596 "data_offset": 0, 00:09:42.596 "data_size": 65536 00:09:42.596 }, 00:09:42.596 { 00:09:42.596 "name": "BaseBdev4", 00:09:42.596 "uuid": "0c17eecf-549a-49b9-8d2e-23ff231d20dd", 00:09:42.596 "is_configured": true, 00:09:42.596 "data_offset": 0, 00:09:42.596 "data_size": 65536 00:09:42.596 } 00:09:42.596 ] 00:09:42.596 }' 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.596 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:43.167 13:23:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.167 [2024-11-20 13:23:24.602676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.167 13:23:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.167 "name": "Existed_Raid", 00:09:43.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.167 "strip_size_kb": 64, 00:09:43.167 "state": "configuring", 00:09:43.167 "raid_level": "raid0", 00:09:43.167 "superblock": false, 00:09:43.167 "num_base_bdevs": 4, 00:09:43.167 "num_base_bdevs_discovered": 2, 00:09:43.167 "num_base_bdevs_operational": 4, 00:09:43.167 "base_bdevs_list": [ 00:09:43.167 { 00:09:43.167 "name": "BaseBdev1", 00:09:43.167 "uuid": "9b509cdf-fffc-4b65-842e-da8916244485", 00:09:43.167 "is_configured": true, 00:09:43.167 "data_offset": 0, 00:09:43.167 "data_size": 65536 00:09:43.167 }, 00:09:43.167 { 00:09:43.167 "name": null, 00:09:43.167 "uuid": "59d0a2af-925f-417e-9c39-2ee935a4c349", 00:09:43.167 "is_configured": false, 00:09:43.167 "data_offset": 0, 00:09:43.167 "data_size": 65536 00:09:43.167 }, 00:09:43.167 { 00:09:43.167 "name": null, 00:09:43.167 "uuid": "ac99002f-e889-492e-865b-e979a32cbc2b", 00:09:43.167 "is_configured": false, 00:09:43.167 "data_offset": 0, 00:09:43.167 "data_size": 65536 00:09:43.167 }, 00:09:43.167 { 00:09:43.167 "name": "BaseBdev4", 00:09:43.167 "uuid": "0c17eecf-549a-49b9-8d2e-23ff231d20dd", 00:09:43.167 "is_configured": true, 00:09:43.167 "data_offset": 0, 00:09:43.167 "data_size": 65536 00:09:43.167 } 00:09:43.167 ] 00:09:43.167 }' 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.167 13:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.426 13:23:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:43.426 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.426 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.426 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.426 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.426 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:43.426 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:43.426 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.426 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.685 [2024-11-20 13:23:25.097905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.685 "name": "Existed_Raid", 00:09:43.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.685 "strip_size_kb": 64, 00:09:43.685 "state": "configuring", 00:09:43.685 "raid_level": "raid0", 00:09:43.685 "superblock": false, 00:09:43.685 "num_base_bdevs": 4, 00:09:43.685 "num_base_bdevs_discovered": 3, 00:09:43.685 "num_base_bdevs_operational": 4, 00:09:43.685 "base_bdevs_list": [ 00:09:43.685 { 00:09:43.685 "name": "BaseBdev1", 00:09:43.685 "uuid": "9b509cdf-fffc-4b65-842e-da8916244485", 00:09:43.685 "is_configured": true, 00:09:43.685 "data_offset": 0, 00:09:43.685 "data_size": 65536 00:09:43.685 }, 00:09:43.685 { 00:09:43.685 "name": null, 00:09:43.685 "uuid": "59d0a2af-925f-417e-9c39-2ee935a4c349", 00:09:43.685 "is_configured": false, 00:09:43.685 "data_offset": 0, 00:09:43.685 "data_size": 65536 00:09:43.685 }, 00:09:43.685 { 00:09:43.685 "name": "BaseBdev3", 00:09:43.685 "uuid": "ac99002f-e889-492e-865b-e979a32cbc2b", 
00:09:43.685 "is_configured": true, 00:09:43.685 "data_offset": 0, 00:09:43.685 "data_size": 65536 00:09:43.685 }, 00:09:43.685 { 00:09:43.685 "name": "BaseBdev4", 00:09:43.685 "uuid": "0c17eecf-549a-49b9-8d2e-23ff231d20dd", 00:09:43.685 "is_configured": true, 00:09:43.685 "data_offset": 0, 00:09:43.685 "data_size": 65536 00:09:43.685 } 00:09:43.685 ] 00:09:43.685 }' 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.685 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.944 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:43.944 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.944 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.944 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.944 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.944 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:43.944 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:43.944 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.944 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.204 [2024-11-20 13:23:25.613051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:44.204 13:23:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.204 "name": "Existed_Raid", 00:09:44.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.204 "strip_size_kb": 64, 00:09:44.204 "state": "configuring", 00:09:44.204 "raid_level": "raid0", 00:09:44.204 "superblock": false, 00:09:44.204 "num_base_bdevs": 4, 00:09:44.204 "num_base_bdevs_discovered": 2, 00:09:44.204 
"num_base_bdevs_operational": 4, 00:09:44.204 "base_bdevs_list": [ 00:09:44.204 { 00:09:44.204 "name": null, 00:09:44.204 "uuid": "9b509cdf-fffc-4b65-842e-da8916244485", 00:09:44.204 "is_configured": false, 00:09:44.204 "data_offset": 0, 00:09:44.204 "data_size": 65536 00:09:44.204 }, 00:09:44.204 { 00:09:44.204 "name": null, 00:09:44.204 "uuid": "59d0a2af-925f-417e-9c39-2ee935a4c349", 00:09:44.204 "is_configured": false, 00:09:44.204 "data_offset": 0, 00:09:44.204 "data_size": 65536 00:09:44.204 }, 00:09:44.204 { 00:09:44.204 "name": "BaseBdev3", 00:09:44.204 "uuid": "ac99002f-e889-492e-865b-e979a32cbc2b", 00:09:44.204 "is_configured": true, 00:09:44.204 "data_offset": 0, 00:09:44.204 "data_size": 65536 00:09:44.204 }, 00:09:44.204 { 00:09:44.204 "name": "BaseBdev4", 00:09:44.204 "uuid": "0c17eecf-549a-49b9-8d2e-23ff231d20dd", 00:09:44.204 "is_configured": true, 00:09:44.204 "data_offset": 0, 00:09:44.204 "data_size": 65536 00:09:44.204 } 00:09:44.204 ] 00:09:44.204 }' 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.204 13:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.464 [2024-11-20 13:23:26.086836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.464 
13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.464 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.724 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.724 "name": "Existed_Raid", 00:09:44.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.724 "strip_size_kb": 64, 00:09:44.724 "state": "configuring", 00:09:44.724 "raid_level": "raid0", 00:09:44.724 "superblock": false, 00:09:44.724 "num_base_bdevs": 4, 00:09:44.724 "num_base_bdevs_discovered": 3, 00:09:44.724 "num_base_bdevs_operational": 4, 00:09:44.724 "base_bdevs_list": [ 00:09:44.724 { 00:09:44.724 "name": null, 00:09:44.724 "uuid": "9b509cdf-fffc-4b65-842e-da8916244485", 00:09:44.724 "is_configured": false, 00:09:44.724 "data_offset": 0, 00:09:44.724 "data_size": 65536 00:09:44.724 }, 00:09:44.724 { 00:09:44.724 "name": "BaseBdev2", 00:09:44.724 "uuid": "59d0a2af-925f-417e-9c39-2ee935a4c349", 00:09:44.724 "is_configured": true, 00:09:44.724 "data_offset": 0, 00:09:44.724 "data_size": 65536 00:09:44.724 }, 00:09:44.724 { 00:09:44.724 "name": "BaseBdev3", 00:09:44.724 "uuid": "ac99002f-e889-492e-865b-e979a32cbc2b", 00:09:44.724 "is_configured": true, 00:09:44.724 "data_offset": 0, 00:09:44.724 "data_size": 65536 00:09:44.724 }, 00:09:44.724 { 00:09:44.724 "name": "BaseBdev4", 00:09:44.724 "uuid": "0c17eecf-549a-49b9-8d2e-23ff231d20dd", 00:09:44.724 "is_configured": true, 00:09:44.724 "data_offset": 0, 00:09:44.724 "data_size": 65536 00:09:44.724 } 00:09:44.724 ] 00:09:44.724 }' 00:09:44.724 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.724 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.984 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:44.984 13:23:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.984 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.984 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.984 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.984 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:44.984 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:44.984 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.984 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.984 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.984 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.984 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9b509cdf-fffc-4b65-842e-da8916244485 00:09:44.984 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.984 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.984 [2024-11-20 13:23:26.617240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:44.984 [2024-11-20 13:23:26.617376] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:44.984 [2024-11-20 13:23:26.617404] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:44.984 [2024-11-20 13:23:26.617723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:09:44.984 
[2024-11-20 13:23:26.617896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:44.984 [2024-11-20 13:23:26.617943] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:44.984 [2024-11-20 13:23:26.618195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.984 NewBaseBdev 00:09:44.985 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.985 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:44.985 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:44.985 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.985 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:44.985 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.985 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.985 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.985 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.985 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.985 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.985 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:44.985 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.985 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:44.985 [ 00:09:44.985 { 00:09:44.985 "name": "NewBaseBdev", 00:09:44.985 "aliases": [ 00:09:44.985 "9b509cdf-fffc-4b65-842e-da8916244485" 00:09:44.985 ], 00:09:44.985 "product_name": "Malloc disk", 00:09:44.985 "block_size": 512, 00:09:44.985 "num_blocks": 65536, 00:09:44.985 "uuid": "9b509cdf-fffc-4b65-842e-da8916244485", 00:09:44.985 "assigned_rate_limits": { 00:09:44.985 "rw_ios_per_sec": 0, 00:09:44.985 "rw_mbytes_per_sec": 0, 00:09:44.985 "r_mbytes_per_sec": 0, 00:09:44.985 "w_mbytes_per_sec": 0 00:09:44.985 }, 00:09:44.985 "claimed": true, 00:09:44.985 "claim_type": "exclusive_write", 00:09:44.985 "zoned": false, 00:09:44.985 "supported_io_types": { 00:09:44.985 "read": true, 00:09:45.245 "write": true, 00:09:45.245 "unmap": true, 00:09:45.245 "flush": true, 00:09:45.245 "reset": true, 00:09:45.245 "nvme_admin": false, 00:09:45.245 "nvme_io": false, 00:09:45.245 "nvme_io_md": false, 00:09:45.245 "write_zeroes": true, 00:09:45.245 "zcopy": true, 00:09:45.245 "get_zone_info": false, 00:09:45.245 "zone_management": false, 00:09:45.245 "zone_append": false, 00:09:45.245 "compare": false, 00:09:45.245 "compare_and_write": false, 00:09:45.245 "abort": true, 00:09:45.245 "seek_hole": false, 00:09:45.245 "seek_data": false, 00:09:45.245 "copy": true, 00:09:45.245 "nvme_iov_md": false 00:09:45.245 }, 00:09:45.245 "memory_domains": [ 00:09:45.245 { 00:09:45.245 "dma_device_id": "system", 00:09:45.245 "dma_device_type": 1 00:09:45.245 }, 00:09:45.245 { 00:09:45.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.245 "dma_device_type": 2 00:09:45.245 } 00:09:45.245 ], 00:09:45.245 "driver_specific": {} 00:09:45.245 } 00:09:45.245 ] 00:09:45.245 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.245 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:45.245 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:09:45.245 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.245 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.245 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.245 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.245 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.245 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.245 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.245 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.245 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.245 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.245 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.245 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.245 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.246 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.246 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.246 "name": "Existed_Raid", 00:09:45.246 "uuid": "12183856-5ff9-4274-aca1-4396b8d12ef0", 00:09:45.246 "strip_size_kb": 64, 00:09:45.246 "state": "online", 00:09:45.246 "raid_level": "raid0", 00:09:45.246 "superblock": false, 00:09:45.246 "num_base_bdevs": 4, 00:09:45.246 
"num_base_bdevs_discovered": 4, 00:09:45.246 "num_base_bdevs_operational": 4, 00:09:45.246 "base_bdevs_list": [ 00:09:45.246 { 00:09:45.246 "name": "NewBaseBdev", 00:09:45.246 "uuid": "9b509cdf-fffc-4b65-842e-da8916244485", 00:09:45.246 "is_configured": true, 00:09:45.246 "data_offset": 0, 00:09:45.246 "data_size": 65536 00:09:45.246 }, 00:09:45.246 { 00:09:45.246 "name": "BaseBdev2", 00:09:45.246 "uuid": "59d0a2af-925f-417e-9c39-2ee935a4c349", 00:09:45.246 "is_configured": true, 00:09:45.246 "data_offset": 0, 00:09:45.246 "data_size": 65536 00:09:45.246 }, 00:09:45.246 { 00:09:45.246 "name": "BaseBdev3", 00:09:45.246 "uuid": "ac99002f-e889-492e-865b-e979a32cbc2b", 00:09:45.246 "is_configured": true, 00:09:45.246 "data_offset": 0, 00:09:45.246 "data_size": 65536 00:09:45.246 }, 00:09:45.246 { 00:09:45.246 "name": "BaseBdev4", 00:09:45.246 "uuid": "0c17eecf-549a-49b9-8d2e-23ff231d20dd", 00:09:45.246 "is_configured": true, 00:09:45.246 "data_offset": 0, 00:09:45.246 "data_size": 65536 00:09:45.246 } 00:09:45.246 ] 00:09:45.246 }' 00:09:45.246 13:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.246 13:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.506 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:45.506 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:45.506 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:45.506 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.506 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.506 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.506 13:23:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:45.506 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.506 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.506 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.506 [2024-11-20 13:23:27.140857] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.506 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.767 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.767 "name": "Existed_Raid", 00:09:45.767 "aliases": [ 00:09:45.767 "12183856-5ff9-4274-aca1-4396b8d12ef0" 00:09:45.767 ], 00:09:45.767 "product_name": "Raid Volume", 00:09:45.767 "block_size": 512, 00:09:45.767 "num_blocks": 262144, 00:09:45.767 "uuid": "12183856-5ff9-4274-aca1-4396b8d12ef0", 00:09:45.767 "assigned_rate_limits": { 00:09:45.767 "rw_ios_per_sec": 0, 00:09:45.767 "rw_mbytes_per_sec": 0, 00:09:45.767 "r_mbytes_per_sec": 0, 00:09:45.767 "w_mbytes_per_sec": 0 00:09:45.767 }, 00:09:45.768 "claimed": false, 00:09:45.768 "zoned": false, 00:09:45.768 "supported_io_types": { 00:09:45.768 "read": true, 00:09:45.768 "write": true, 00:09:45.768 "unmap": true, 00:09:45.768 "flush": true, 00:09:45.768 "reset": true, 00:09:45.768 "nvme_admin": false, 00:09:45.768 "nvme_io": false, 00:09:45.768 "nvme_io_md": false, 00:09:45.768 "write_zeroes": true, 00:09:45.768 "zcopy": false, 00:09:45.768 "get_zone_info": false, 00:09:45.768 "zone_management": false, 00:09:45.768 "zone_append": false, 00:09:45.768 "compare": false, 00:09:45.768 "compare_and_write": false, 00:09:45.768 "abort": false, 00:09:45.768 "seek_hole": false, 00:09:45.768 "seek_data": false, 00:09:45.768 "copy": false, 00:09:45.768 "nvme_iov_md": false 00:09:45.768 }, 00:09:45.768 "memory_domains": [ 
00:09:45.768 { 00:09:45.768 "dma_device_id": "system", 00:09:45.768 "dma_device_type": 1 00:09:45.768 }, 00:09:45.768 { 00:09:45.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.768 "dma_device_type": 2 00:09:45.768 }, 00:09:45.768 { 00:09:45.768 "dma_device_id": "system", 00:09:45.768 "dma_device_type": 1 00:09:45.768 }, 00:09:45.768 { 00:09:45.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.768 "dma_device_type": 2 00:09:45.768 }, 00:09:45.768 { 00:09:45.768 "dma_device_id": "system", 00:09:45.768 "dma_device_type": 1 00:09:45.768 }, 00:09:45.768 { 00:09:45.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.768 "dma_device_type": 2 00:09:45.768 }, 00:09:45.768 { 00:09:45.768 "dma_device_id": "system", 00:09:45.768 "dma_device_type": 1 00:09:45.768 }, 00:09:45.768 { 00:09:45.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.768 "dma_device_type": 2 00:09:45.768 } 00:09:45.768 ], 00:09:45.768 "driver_specific": { 00:09:45.768 "raid": { 00:09:45.768 "uuid": "12183856-5ff9-4274-aca1-4396b8d12ef0", 00:09:45.768 "strip_size_kb": 64, 00:09:45.768 "state": "online", 00:09:45.768 "raid_level": "raid0", 00:09:45.768 "superblock": false, 00:09:45.768 "num_base_bdevs": 4, 00:09:45.768 "num_base_bdevs_discovered": 4, 00:09:45.768 "num_base_bdevs_operational": 4, 00:09:45.768 "base_bdevs_list": [ 00:09:45.768 { 00:09:45.768 "name": "NewBaseBdev", 00:09:45.768 "uuid": "9b509cdf-fffc-4b65-842e-da8916244485", 00:09:45.768 "is_configured": true, 00:09:45.768 "data_offset": 0, 00:09:45.768 "data_size": 65536 00:09:45.768 }, 00:09:45.768 { 00:09:45.768 "name": "BaseBdev2", 00:09:45.768 "uuid": "59d0a2af-925f-417e-9c39-2ee935a4c349", 00:09:45.768 "is_configured": true, 00:09:45.768 "data_offset": 0, 00:09:45.768 "data_size": 65536 00:09:45.768 }, 00:09:45.768 { 00:09:45.768 "name": "BaseBdev3", 00:09:45.768 "uuid": "ac99002f-e889-492e-865b-e979a32cbc2b", 00:09:45.768 "is_configured": true, 00:09:45.768 "data_offset": 0, 00:09:45.768 "data_size": 65536 
00:09:45.768 }, 00:09:45.768 { 00:09:45.768 "name": "BaseBdev4", 00:09:45.768 "uuid": "0c17eecf-549a-49b9-8d2e-23ff231d20dd", 00:09:45.768 "is_configured": true, 00:09:45.768 "data_offset": 0, 00:09:45.768 "data_size": 65536 00:09:45.768 } 00:09:45.768 ] 00:09:45.768 } 00:09:45.768 } 00:09:45.768 }' 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:45.768 BaseBdev2 00:09:45.768 BaseBdev3 00:09:45.768 BaseBdev4' 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.768 
13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.768 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.028 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.028 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.028 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:46.028 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.028 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.028 [2024-11-20 13:23:27.471963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:46.029 [2024-11-20 13:23:27.472076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.029 [2024-11-20 13:23:27.472211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.029 [2024-11-20 13:23:27.472328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.029 [2024-11-20 13:23:27.472389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:46.029 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.029 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80057 00:09:46.029 13:23:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 80057 ']' 00:09:46.029 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80057 00:09:46.029 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:46.029 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.029 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80057 00:09:46.029 killing process with pid 80057 00:09:46.029 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.029 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.029 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80057' 00:09:46.029 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 80057 00:09:46.029 [2024-11-20 13:23:27.514524] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.029 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 80057 00:09:46.029 [2024-11-20 13:23:27.555938] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:46.289 00:09:46.289 real 0m9.404s 00:09:46.289 user 0m16.060s 00:09:46.289 sys 0m1.987s 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.289 ************************************ 00:09:46.289 END TEST raid_state_function_test 00:09:46.289 ************************************ 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.289 13:23:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:09:46.289 13:23:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:46.289 13:23:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.289 13:23:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.289 ************************************ 00:09:46.289 START TEST raid_state_function_test_sb 00:09:46.289 ************************************ 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:46.289 
13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:46.289 Process raid pid: 80701 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80701 00:09:46.289 13:23:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80701' 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80701 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80701 ']' 00:09:46.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.289 13:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.289 [2024-11-20 13:23:27.934284] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:09:46.289 [2024-11-20 13:23:27.934482] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.550 [2024-11-20 13:23:28.093969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.550 [2024-11-20 13:23:28.122304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.550 [2024-11-20 13:23:28.167086] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.550 [2024-11-20 13:23:28.167145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.118 13:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.118 13:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:47.118 13:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:47.118 13:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.118 13:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.118 [2024-11-20 13:23:28.777929] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.118 [2024-11-20 13:23:28.778081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.118 [2024-11-20 13:23:28.778131] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.118 [2024-11-20 13:23:28.778162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.119 [2024-11-20 13:23:28.778225] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:47.119 [2024-11-20 13:23:28.778265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:47.119 [2024-11-20 13:23:28.778299] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:47.119 [2024-11-20 13:23:28.778339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:47.119 13:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.119 13:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.119 13:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.119 13:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.119 13:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.119 13:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.119 13:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.119 13:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.119 13:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.378 13:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.378 13:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.378 13:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.378 13:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.378 13:23:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.378 13:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.378 13:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.378 13:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.378 "name": "Existed_Raid", 00:09:47.378 "uuid": "c7edcd65-8fec-4c22-911d-51e5aa47f3ff", 00:09:47.378 "strip_size_kb": 64, 00:09:47.378 "state": "configuring", 00:09:47.378 "raid_level": "raid0", 00:09:47.378 "superblock": true, 00:09:47.378 "num_base_bdevs": 4, 00:09:47.378 "num_base_bdevs_discovered": 0, 00:09:47.378 "num_base_bdevs_operational": 4, 00:09:47.378 "base_bdevs_list": [ 00:09:47.378 { 00:09:47.378 "name": "BaseBdev1", 00:09:47.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.378 "is_configured": false, 00:09:47.378 "data_offset": 0, 00:09:47.378 "data_size": 0 00:09:47.378 }, 00:09:47.378 { 00:09:47.378 "name": "BaseBdev2", 00:09:47.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.378 "is_configured": false, 00:09:47.378 "data_offset": 0, 00:09:47.378 "data_size": 0 00:09:47.378 }, 00:09:47.378 { 00:09:47.378 "name": "BaseBdev3", 00:09:47.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.378 "is_configured": false, 00:09:47.378 "data_offset": 0, 00:09:47.378 "data_size": 0 00:09:47.378 }, 00:09:47.378 { 00:09:47.378 "name": "BaseBdev4", 00:09:47.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.378 "is_configured": false, 00:09:47.378 "data_offset": 0, 00:09:47.378 "data_size": 0 00:09:47.378 } 00:09:47.378 ] 00:09:47.378 }' 00:09:47.378 13:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.378 13:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.638 [2024-11-20 13:23:29.213096] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.638 [2024-11-20 13:23:29.213195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.638 [2024-11-20 13:23:29.225097] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.638 [2024-11-20 13:23:29.225191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.638 [2024-11-20 13:23:29.225225] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.638 [2024-11-20 13:23:29.225255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.638 [2024-11-20 13:23:29.225278] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:47.638 [2024-11-20 13:23:29.225306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:47.638 [2024-11-20 13:23:29.225330] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:47.638 [2024-11-20 13:23:29.225398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.638 [2024-11-20 13:23:29.246441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.638 BaseBdev1 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.638 [ 00:09:47.638 { 00:09:47.638 "name": "BaseBdev1", 00:09:47.638 "aliases": [ 00:09:47.638 "361d3e28-90e6-4901-9152-32b5289dbb8c" 00:09:47.638 ], 00:09:47.638 "product_name": "Malloc disk", 00:09:47.638 "block_size": 512, 00:09:47.638 "num_blocks": 65536, 00:09:47.638 "uuid": "361d3e28-90e6-4901-9152-32b5289dbb8c", 00:09:47.638 "assigned_rate_limits": { 00:09:47.638 "rw_ios_per_sec": 0, 00:09:47.638 "rw_mbytes_per_sec": 0, 00:09:47.638 "r_mbytes_per_sec": 0, 00:09:47.638 "w_mbytes_per_sec": 0 00:09:47.638 }, 00:09:47.638 "claimed": true, 00:09:47.638 "claim_type": "exclusive_write", 00:09:47.638 "zoned": false, 00:09:47.638 "supported_io_types": { 00:09:47.638 "read": true, 00:09:47.638 "write": true, 00:09:47.638 "unmap": true, 00:09:47.638 "flush": true, 00:09:47.638 "reset": true, 00:09:47.638 "nvme_admin": false, 00:09:47.638 "nvme_io": false, 00:09:47.638 "nvme_io_md": false, 00:09:47.638 "write_zeroes": true, 00:09:47.638 "zcopy": true, 00:09:47.638 "get_zone_info": false, 00:09:47.638 "zone_management": false, 00:09:47.638 "zone_append": false, 00:09:47.638 "compare": false, 00:09:47.638 "compare_and_write": false, 00:09:47.638 "abort": true, 00:09:47.638 "seek_hole": false, 00:09:47.638 "seek_data": false, 00:09:47.638 "copy": true, 00:09:47.638 "nvme_iov_md": false 00:09:47.638 }, 00:09:47.638 "memory_domains": [ 00:09:47.638 { 00:09:47.638 "dma_device_id": "system", 00:09:47.638 "dma_device_type": 1 00:09:47.638 }, 00:09:47.638 { 00:09:47.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.638 "dma_device_type": 2 00:09:47.638 } 00:09:47.638 ], 00:09:47.638 "driver_specific": {} 
00:09:47.638 } 00:09:47.638 ] 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.638 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.639 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.639 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.639 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.639 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.639 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.639 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.639 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.639 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.639 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.639 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.639 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.639 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.639 13:23:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.898 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.898 "name": "Existed_Raid", 00:09:47.898 "uuid": "326a290f-a2cb-4802-a428-6ad5096bda74", 00:09:47.898 "strip_size_kb": 64, 00:09:47.898 "state": "configuring", 00:09:47.898 "raid_level": "raid0", 00:09:47.898 "superblock": true, 00:09:47.898 "num_base_bdevs": 4, 00:09:47.898 "num_base_bdevs_discovered": 1, 00:09:47.898 "num_base_bdevs_operational": 4, 00:09:47.898 "base_bdevs_list": [ 00:09:47.898 { 00:09:47.898 "name": "BaseBdev1", 00:09:47.898 "uuid": "361d3e28-90e6-4901-9152-32b5289dbb8c", 00:09:47.898 "is_configured": true, 00:09:47.898 "data_offset": 2048, 00:09:47.898 "data_size": 63488 00:09:47.898 }, 00:09:47.898 { 00:09:47.898 "name": "BaseBdev2", 00:09:47.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.898 "is_configured": false, 00:09:47.898 "data_offset": 0, 00:09:47.898 "data_size": 0 00:09:47.898 }, 00:09:47.898 { 00:09:47.898 "name": "BaseBdev3", 00:09:47.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.898 "is_configured": false, 00:09:47.898 "data_offset": 0, 00:09:47.898 "data_size": 0 00:09:47.898 }, 00:09:47.898 { 00:09:47.898 "name": "BaseBdev4", 00:09:47.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.898 "is_configured": false, 00:09:47.898 "data_offset": 0, 00:09:47.898 "data_size": 0 00:09:47.898 } 00:09:47.898 ] 00:09:47.898 }' 00:09:47.898 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.898 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.157 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:48.158 [2024-11-20 13:23:29.709748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:48.158 [2024-11-20 13:23:29.709876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.158 [2024-11-20 13:23:29.721757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.158 [2024-11-20 13:23:29.723764] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.158 [2024-11-20 13:23:29.723854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.158 [2024-11-20 13:23:29.723886] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:48.158 [2024-11-20 13:23:29.723914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:48.158 [2024-11-20 13:23:29.723937] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:48.158 [2024-11-20 13:23:29.723962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:48.158 13:23:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.158 "name": 
"Existed_Raid", 00:09:48.158 "uuid": "da9e929c-8418-4cc4-bfc2-edcd3e6aa783", 00:09:48.158 "strip_size_kb": 64, 00:09:48.158 "state": "configuring", 00:09:48.158 "raid_level": "raid0", 00:09:48.158 "superblock": true, 00:09:48.158 "num_base_bdevs": 4, 00:09:48.158 "num_base_bdevs_discovered": 1, 00:09:48.158 "num_base_bdevs_operational": 4, 00:09:48.158 "base_bdevs_list": [ 00:09:48.158 { 00:09:48.158 "name": "BaseBdev1", 00:09:48.158 "uuid": "361d3e28-90e6-4901-9152-32b5289dbb8c", 00:09:48.158 "is_configured": true, 00:09:48.158 "data_offset": 2048, 00:09:48.158 "data_size": 63488 00:09:48.158 }, 00:09:48.158 { 00:09:48.158 "name": "BaseBdev2", 00:09:48.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.158 "is_configured": false, 00:09:48.158 "data_offset": 0, 00:09:48.158 "data_size": 0 00:09:48.158 }, 00:09:48.158 { 00:09:48.158 "name": "BaseBdev3", 00:09:48.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.158 "is_configured": false, 00:09:48.158 "data_offset": 0, 00:09:48.158 "data_size": 0 00:09:48.158 }, 00:09:48.158 { 00:09:48.158 "name": "BaseBdev4", 00:09:48.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.158 "is_configured": false, 00:09:48.158 "data_offset": 0, 00:09:48.158 "data_size": 0 00:09:48.158 } 00:09:48.158 ] 00:09:48.158 }' 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.158 13:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.735 [2024-11-20 13:23:30.160235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:09:48.735 BaseBdev2 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.735 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.735 [ 00:09:48.735 { 00:09:48.735 "name": "BaseBdev2", 00:09:48.735 "aliases": [ 00:09:48.735 "83bef9cb-88e0-467a-a56a-22c60a869dc9" 00:09:48.735 ], 00:09:48.735 "product_name": "Malloc disk", 00:09:48.735 "block_size": 512, 00:09:48.735 "num_blocks": 65536, 00:09:48.736 "uuid": "83bef9cb-88e0-467a-a56a-22c60a869dc9", 00:09:48.736 
"assigned_rate_limits": { 00:09:48.736 "rw_ios_per_sec": 0, 00:09:48.736 "rw_mbytes_per_sec": 0, 00:09:48.736 "r_mbytes_per_sec": 0, 00:09:48.736 "w_mbytes_per_sec": 0 00:09:48.736 }, 00:09:48.736 "claimed": true, 00:09:48.736 "claim_type": "exclusive_write", 00:09:48.736 "zoned": false, 00:09:48.736 "supported_io_types": { 00:09:48.736 "read": true, 00:09:48.736 "write": true, 00:09:48.736 "unmap": true, 00:09:48.736 "flush": true, 00:09:48.736 "reset": true, 00:09:48.736 "nvme_admin": false, 00:09:48.736 "nvme_io": false, 00:09:48.736 "nvme_io_md": false, 00:09:48.736 "write_zeroes": true, 00:09:48.736 "zcopy": true, 00:09:48.736 "get_zone_info": false, 00:09:48.736 "zone_management": false, 00:09:48.736 "zone_append": false, 00:09:48.736 "compare": false, 00:09:48.736 "compare_and_write": false, 00:09:48.736 "abort": true, 00:09:48.736 "seek_hole": false, 00:09:48.736 "seek_data": false, 00:09:48.736 "copy": true, 00:09:48.736 "nvme_iov_md": false 00:09:48.736 }, 00:09:48.736 "memory_domains": [ 00:09:48.736 { 00:09:48.736 "dma_device_id": "system", 00:09:48.736 "dma_device_type": 1 00:09:48.736 }, 00:09:48.736 { 00:09:48.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.736 "dma_device_type": 2 00:09:48.736 } 00:09:48.736 ], 00:09:48.736 "driver_specific": {} 00:09:48.736 } 00:09:48.736 ] 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.736 "name": "Existed_Raid", 00:09:48.736 "uuid": "da9e929c-8418-4cc4-bfc2-edcd3e6aa783", 00:09:48.736 "strip_size_kb": 64, 00:09:48.736 "state": "configuring", 00:09:48.736 "raid_level": "raid0", 00:09:48.736 "superblock": true, 00:09:48.736 "num_base_bdevs": 4, 00:09:48.736 "num_base_bdevs_discovered": 2, 00:09:48.736 "num_base_bdevs_operational": 4, 
00:09:48.736 "base_bdevs_list": [ 00:09:48.736 { 00:09:48.736 "name": "BaseBdev1", 00:09:48.736 "uuid": "361d3e28-90e6-4901-9152-32b5289dbb8c", 00:09:48.736 "is_configured": true, 00:09:48.736 "data_offset": 2048, 00:09:48.736 "data_size": 63488 00:09:48.736 }, 00:09:48.736 { 00:09:48.736 "name": "BaseBdev2", 00:09:48.736 "uuid": "83bef9cb-88e0-467a-a56a-22c60a869dc9", 00:09:48.736 "is_configured": true, 00:09:48.736 "data_offset": 2048, 00:09:48.736 "data_size": 63488 00:09:48.736 }, 00:09:48.736 { 00:09:48.736 "name": "BaseBdev3", 00:09:48.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.736 "is_configured": false, 00:09:48.736 "data_offset": 0, 00:09:48.736 "data_size": 0 00:09:48.736 }, 00:09:48.736 { 00:09:48.736 "name": "BaseBdev4", 00:09:48.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.736 "is_configured": false, 00:09:48.736 "data_offset": 0, 00:09:48.736 "data_size": 0 00:09:48.736 } 00:09:48.736 ] 00:09:48.736 }' 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.736 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.012 [2024-11-20 13:23:30.595537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.012 BaseBdev3 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.012 [ 00:09:49.012 { 00:09:49.012 "name": "BaseBdev3", 00:09:49.012 "aliases": [ 00:09:49.012 "256afe8f-5169-4ef4-a594-f6051ae68664" 00:09:49.012 ], 00:09:49.012 "product_name": "Malloc disk", 00:09:49.012 "block_size": 512, 00:09:49.012 "num_blocks": 65536, 00:09:49.012 "uuid": "256afe8f-5169-4ef4-a594-f6051ae68664", 00:09:49.012 "assigned_rate_limits": { 00:09:49.012 "rw_ios_per_sec": 0, 00:09:49.012 "rw_mbytes_per_sec": 0, 00:09:49.012 "r_mbytes_per_sec": 0, 00:09:49.012 "w_mbytes_per_sec": 0 00:09:49.012 }, 00:09:49.012 "claimed": true, 00:09:49.012 "claim_type": "exclusive_write", 00:09:49.012 "zoned": false, 00:09:49.012 "supported_io_types": { 00:09:49.012 "read": true, 00:09:49.012 
"write": true, 00:09:49.012 "unmap": true, 00:09:49.012 "flush": true, 00:09:49.012 "reset": true, 00:09:49.012 "nvme_admin": false, 00:09:49.012 "nvme_io": false, 00:09:49.012 "nvme_io_md": false, 00:09:49.012 "write_zeroes": true, 00:09:49.012 "zcopy": true, 00:09:49.012 "get_zone_info": false, 00:09:49.012 "zone_management": false, 00:09:49.012 "zone_append": false, 00:09:49.012 "compare": false, 00:09:49.012 "compare_and_write": false, 00:09:49.012 "abort": true, 00:09:49.012 "seek_hole": false, 00:09:49.012 "seek_data": false, 00:09:49.012 "copy": true, 00:09:49.012 "nvme_iov_md": false 00:09:49.012 }, 00:09:49.012 "memory_domains": [ 00:09:49.012 { 00:09:49.012 "dma_device_id": "system", 00:09:49.012 "dma_device_type": 1 00:09:49.012 }, 00:09:49.012 { 00:09:49.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.012 "dma_device_type": 2 00:09:49.012 } 00:09:49.012 ], 00:09:49.012 "driver_specific": {} 00:09:49.012 } 00:09:49.012 ] 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.012 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.272 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.272 "name": "Existed_Raid", 00:09:49.272 "uuid": "da9e929c-8418-4cc4-bfc2-edcd3e6aa783", 00:09:49.272 "strip_size_kb": 64, 00:09:49.272 "state": "configuring", 00:09:49.272 "raid_level": "raid0", 00:09:49.272 "superblock": true, 00:09:49.272 "num_base_bdevs": 4, 00:09:49.272 "num_base_bdevs_discovered": 3, 00:09:49.272 "num_base_bdevs_operational": 4, 00:09:49.272 "base_bdevs_list": [ 00:09:49.272 { 00:09:49.272 "name": "BaseBdev1", 00:09:49.272 "uuid": "361d3e28-90e6-4901-9152-32b5289dbb8c", 00:09:49.272 "is_configured": true, 00:09:49.272 "data_offset": 2048, 00:09:49.272 "data_size": 63488 00:09:49.272 }, 00:09:49.272 { 00:09:49.272 "name": "BaseBdev2", 00:09:49.272 "uuid": 
"83bef9cb-88e0-467a-a56a-22c60a869dc9", 00:09:49.272 "is_configured": true, 00:09:49.272 "data_offset": 2048, 00:09:49.272 "data_size": 63488 00:09:49.272 }, 00:09:49.272 { 00:09:49.272 "name": "BaseBdev3", 00:09:49.272 "uuid": "256afe8f-5169-4ef4-a594-f6051ae68664", 00:09:49.272 "is_configured": true, 00:09:49.272 "data_offset": 2048, 00:09:49.272 "data_size": 63488 00:09:49.272 }, 00:09:49.272 { 00:09:49.272 "name": "BaseBdev4", 00:09:49.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.272 "is_configured": false, 00:09:49.272 "data_offset": 0, 00:09:49.272 "data_size": 0 00:09:49.272 } 00:09:49.272 ] 00:09:49.272 }' 00:09:49.272 13:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.272 13:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.532 [2024-11-20 13:23:31.078344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:49.532 [2024-11-20 13:23:31.078691] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:49.532 [2024-11-20 13:23:31.078754] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:49.532 BaseBdev4 00:09:49.532 [2024-11-20 13:23:31.079119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:49.532 [2024-11-20 13:23:31.079307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:49.532 [2024-11-20 13:23:31.079332] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:09:49.532 [2024-11-20 13:23:31.079486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.532 [ 00:09:49.532 { 00:09:49.532 "name": "BaseBdev4", 00:09:49.532 "aliases": [ 00:09:49.532 "ca7a455a-0a96-41cf-bcbe-1c4406935a1b" 00:09:49.532 ], 00:09:49.532 "product_name": "Malloc disk", 00:09:49.532 "block_size": 512, 00:09:49.532 
"num_blocks": 65536, 00:09:49.532 "uuid": "ca7a455a-0a96-41cf-bcbe-1c4406935a1b", 00:09:49.532 "assigned_rate_limits": { 00:09:49.532 "rw_ios_per_sec": 0, 00:09:49.532 "rw_mbytes_per_sec": 0, 00:09:49.532 "r_mbytes_per_sec": 0, 00:09:49.532 "w_mbytes_per_sec": 0 00:09:49.532 }, 00:09:49.532 "claimed": true, 00:09:49.532 "claim_type": "exclusive_write", 00:09:49.532 "zoned": false, 00:09:49.532 "supported_io_types": { 00:09:49.532 "read": true, 00:09:49.532 "write": true, 00:09:49.532 "unmap": true, 00:09:49.532 "flush": true, 00:09:49.532 "reset": true, 00:09:49.532 "nvme_admin": false, 00:09:49.532 "nvme_io": false, 00:09:49.532 "nvme_io_md": false, 00:09:49.532 "write_zeroes": true, 00:09:49.532 "zcopy": true, 00:09:49.532 "get_zone_info": false, 00:09:49.532 "zone_management": false, 00:09:49.532 "zone_append": false, 00:09:49.532 "compare": false, 00:09:49.532 "compare_and_write": false, 00:09:49.532 "abort": true, 00:09:49.532 "seek_hole": false, 00:09:49.532 "seek_data": false, 00:09:49.532 "copy": true, 00:09:49.532 "nvme_iov_md": false 00:09:49.532 }, 00:09:49.532 "memory_domains": [ 00:09:49.532 { 00:09:49.532 "dma_device_id": "system", 00:09:49.532 "dma_device_type": 1 00:09:49.532 }, 00:09:49.532 { 00:09:49.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.532 "dma_device_type": 2 00:09:49.532 } 00:09:49.532 ], 00:09:49.532 "driver_specific": {} 00:09:49.532 } 00:09:49.532 ] 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.532 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.533 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.533 "name": "Existed_Raid", 00:09:49.533 "uuid": "da9e929c-8418-4cc4-bfc2-edcd3e6aa783", 00:09:49.533 "strip_size_kb": 64, 00:09:49.533 "state": "online", 00:09:49.533 "raid_level": "raid0", 00:09:49.533 "superblock": true, 00:09:49.533 "num_base_bdevs": 4, 
00:09:49.533 "num_base_bdevs_discovered": 4, 00:09:49.533 "num_base_bdevs_operational": 4, 00:09:49.533 "base_bdevs_list": [ 00:09:49.533 { 00:09:49.533 "name": "BaseBdev1", 00:09:49.533 "uuid": "361d3e28-90e6-4901-9152-32b5289dbb8c", 00:09:49.533 "is_configured": true, 00:09:49.533 "data_offset": 2048, 00:09:49.533 "data_size": 63488 00:09:49.533 }, 00:09:49.533 { 00:09:49.533 "name": "BaseBdev2", 00:09:49.533 "uuid": "83bef9cb-88e0-467a-a56a-22c60a869dc9", 00:09:49.533 "is_configured": true, 00:09:49.533 "data_offset": 2048, 00:09:49.533 "data_size": 63488 00:09:49.533 }, 00:09:49.533 { 00:09:49.533 "name": "BaseBdev3", 00:09:49.533 "uuid": "256afe8f-5169-4ef4-a594-f6051ae68664", 00:09:49.533 "is_configured": true, 00:09:49.533 "data_offset": 2048, 00:09:49.533 "data_size": 63488 00:09:49.533 }, 00:09:49.533 { 00:09:49.533 "name": "BaseBdev4", 00:09:49.533 "uuid": "ca7a455a-0a96-41cf-bcbe-1c4406935a1b", 00:09:49.533 "is_configured": true, 00:09:49.533 "data_offset": 2048, 00:09:49.533 "data_size": 63488 00:09:49.533 } 00:09:49.533 ] 00:09:49.533 }' 00:09:49.533 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.533 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.101 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:50.101 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:50.101 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:50.101 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:50.101 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:50.101 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:50.101 
13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:50.101 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:50.101 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.101 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.101 [2024-11-20 13:23:31.558053] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.101 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.101 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:50.101 "name": "Existed_Raid", 00:09:50.101 "aliases": [ 00:09:50.102 "da9e929c-8418-4cc4-bfc2-edcd3e6aa783" 00:09:50.102 ], 00:09:50.102 "product_name": "Raid Volume", 00:09:50.102 "block_size": 512, 00:09:50.102 "num_blocks": 253952, 00:09:50.102 "uuid": "da9e929c-8418-4cc4-bfc2-edcd3e6aa783", 00:09:50.102 "assigned_rate_limits": { 00:09:50.102 "rw_ios_per_sec": 0, 00:09:50.102 "rw_mbytes_per_sec": 0, 00:09:50.102 "r_mbytes_per_sec": 0, 00:09:50.102 "w_mbytes_per_sec": 0 00:09:50.102 }, 00:09:50.102 "claimed": false, 00:09:50.102 "zoned": false, 00:09:50.102 "supported_io_types": { 00:09:50.102 "read": true, 00:09:50.102 "write": true, 00:09:50.102 "unmap": true, 00:09:50.102 "flush": true, 00:09:50.102 "reset": true, 00:09:50.102 "nvme_admin": false, 00:09:50.102 "nvme_io": false, 00:09:50.102 "nvme_io_md": false, 00:09:50.102 "write_zeroes": true, 00:09:50.102 "zcopy": false, 00:09:50.102 "get_zone_info": false, 00:09:50.102 "zone_management": false, 00:09:50.102 "zone_append": false, 00:09:50.102 "compare": false, 00:09:50.102 "compare_and_write": false, 00:09:50.102 "abort": false, 00:09:50.102 "seek_hole": false, 00:09:50.102 "seek_data": false, 00:09:50.102 "copy": false, 00:09:50.102 
"nvme_iov_md": false 00:09:50.102 }, 00:09:50.102 "memory_domains": [ 00:09:50.102 { 00:09:50.102 "dma_device_id": "system", 00:09:50.102 "dma_device_type": 1 00:09:50.102 }, 00:09:50.102 { 00:09:50.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.102 "dma_device_type": 2 00:09:50.102 }, 00:09:50.102 { 00:09:50.102 "dma_device_id": "system", 00:09:50.102 "dma_device_type": 1 00:09:50.102 }, 00:09:50.102 { 00:09:50.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.102 "dma_device_type": 2 00:09:50.102 }, 00:09:50.102 { 00:09:50.102 "dma_device_id": "system", 00:09:50.102 "dma_device_type": 1 00:09:50.102 }, 00:09:50.102 { 00:09:50.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.102 "dma_device_type": 2 00:09:50.102 }, 00:09:50.102 { 00:09:50.102 "dma_device_id": "system", 00:09:50.102 "dma_device_type": 1 00:09:50.102 }, 00:09:50.102 { 00:09:50.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.102 "dma_device_type": 2 00:09:50.102 } 00:09:50.102 ], 00:09:50.102 "driver_specific": { 00:09:50.102 "raid": { 00:09:50.102 "uuid": "da9e929c-8418-4cc4-bfc2-edcd3e6aa783", 00:09:50.102 "strip_size_kb": 64, 00:09:50.102 "state": "online", 00:09:50.102 "raid_level": "raid0", 00:09:50.102 "superblock": true, 00:09:50.102 "num_base_bdevs": 4, 00:09:50.102 "num_base_bdevs_discovered": 4, 00:09:50.102 "num_base_bdevs_operational": 4, 00:09:50.102 "base_bdevs_list": [ 00:09:50.102 { 00:09:50.102 "name": "BaseBdev1", 00:09:50.102 "uuid": "361d3e28-90e6-4901-9152-32b5289dbb8c", 00:09:50.102 "is_configured": true, 00:09:50.102 "data_offset": 2048, 00:09:50.102 "data_size": 63488 00:09:50.102 }, 00:09:50.102 { 00:09:50.102 "name": "BaseBdev2", 00:09:50.102 "uuid": "83bef9cb-88e0-467a-a56a-22c60a869dc9", 00:09:50.102 "is_configured": true, 00:09:50.102 "data_offset": 2048, 00:09:50.102 "data_size": 63488 00:09:50.102 }, 00:09:50.102 { 00:09:50.102 "name": "BaseBdev3", 00:09:50.102 "uuid": "256afe8f-5169-4ef4-a594-f6051ae68664", 00:09:50.102 "is_configured": true, 
00:09:50.102 "data_offset": 2048, 00:09:50.102 "data_size": 63488 00:09:50.102 }, 00:09:50.102 { 00:09:50.102 "name": "BaseBdev4", 00:09:50.102 "uuid": "ca7a455a-0a96-41cf-bcbe-1c4406935a1b", 00:09:50.102 "is_configured": true, 00:09:50.102 "data_offset": 2048, 00:09:50.102 "data_size": 63488 00:09:50.102 } 00:09:50.102 ] 00:09:50.102 } 00:09:50.102 } 00:09:50.102 }' 00:09:50.102 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.102 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:50.102 BaseBdev2 00:09:50.102 BaseBdev3 00:09:50.102 BaseBdev4' 00:09:50.102 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.102 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:50.102 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.102 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.102 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:50.102 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.102 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.102 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.102 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.102 13:23:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.102 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:50.102 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.102 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.362 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.362 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.362 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.362 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.362 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:50.362 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.362 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.362 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.362 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.362 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.362 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.362 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:50.362 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.363 [2024-11-20 13:23:31.905128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:50.363 [2024-11-20 13:23:31.905212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.363 [2024-11-20 13:23:31.905301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
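The `has_redundancy raid0` call traced above (bdev_raid.sh@198-200) returns 1, which is why the test sets `expected_state=offline` after a base bdev is deleted. A minimal runnable sketch of that branch logic follows; the set of levels treated as redundant is an assumption for illustration (the trace only shows the non-redundant raid0 path taking `return 1`):

```shell
#!/usr/bin/env bash
# Sketch of the has_redundancy check driving expected_state in the trace.
# raid0 stripes without parity or mirroring, so losing a base bdev must
# take the array offline rather than degraded-online.
has_redundancy() {
  case $1 in
    raid1|raid5f) return 0 ;;  # assumed redundant levels, for illustration only
    *) return 1 ;;             # raid0 falls through here, as in the log
  esac
}

if has_redundancy raid0; then
  expected_state=online
else
  expected_state=offline
fi
echo "expected_state=$expected_state"
```

Running this prints `expected_state=offline`, matching the `expected_state=offline` assignment in the trace before `verify_raid_bdev_state Existed_Raid offline raid0 64 3`.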
00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.363 "name": "Existed_Raid", 00:09:50.363 "uuid": "da9e929c-8418-4cc4-bfc2-edcd3e6aa783", 00:09:50.363 "strip_size_kb": 64, 00:09:50.363 "state": "offline", 00:09:50.363 "raid_level": "raid0", 00:09:50.363 "superblock": true, 00:09:50.363 "num_base_bdevs": 4, 00:09:50.363 "num_base_bdevs_discovered": 3, 00:09:50.363 "num_base_bdevs_operational": 3, 00:09:50.363 "base_bdevs_list": [ 00:09:50.363 { 00:09:50.363 "name": null, 00:09:50.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.363 "is_configured": false, 00:09:50.363 "data_offset": 0, 00:09:50.363 "data_size": 63488 00:09:50.363 }, 00:09:50.363 { 00:09:50.363 "name": "BaseBdev2", 00:09:50.363 "uuid": "83bef9cb-88e0-467a-a56a-22c60a869dc9", 00:09:50.363 "is_configured": true, 00:09:50.363 "data_offset": 2048, 00:09:50.363 "data_size": 63488 00:09:50.363 }, 00:09:50.363 { 00:09:50.363 "name": "BaseBdev3", 00:09:50.363 "uuid": "256afe8f-5169-4ef4-a594-f6051ae68664", 00:09:50.363 "is_configured": true, 00:09:50.363 "data_offset": 2048, 00:09:50.363 "data_size": 63488 00:09:50.363 }, 00:09:50.363 { 00:09:50.363 "name": "BaseBdev4", 00:09:50.363 "uuid": "ca7a455a-0a96-41cf-bcbe-1c4406935a1b", 00:09:50.363 "is_configured": true, 00:09:50.363 "data_offset": 2048, 00:09:50.363 "data_size": 63488 00:09:50.363 } 00:09:50.363 ] 00:09:50.363 }' 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.363 13:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.932 
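Earlier in the trace (bdev_raid.sh@191-193), each base bdev's metadata line was built with `jq` and compared against the glob `\5\1\2\ \ \ `. The sketch below reproduces that check on a sample JSON document standing in for `rpc_cmd bdev_get_bdevs -b <name>` output; the field values mirror a 512-byte-block malloc bdev with no metadata, which is what the log shows:

```shell
#!/usr/bin/env bash
# Sketch of the per-base-bdev metadata comparison from the trace.
# jq's join(" ") renders nulls as empty strings, so a bdev with
# block_size 512 and null md_size/md_interleave/dif_type yields
# "512" followed by three spaces -- hence the escaped-space glob.
sample='[{"name":"BaseBdev2","block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}]'
cmp_base_bdev=$(echo "$sample" | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
if [[ $cmp_base_bdev == "512   " ]]; then
  echo "metadata matches"
fi
```

This explains the otherwise cryptic `cmp_base_bdev='512 '` / `[[ 512 == \5\1\2\ \ \ ]]` pair in the trace: the trailing spaces come from the three null metadata fields.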
13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.932 [2024-11-20 13:23:32.403852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.932 [2024-11-20 13:23:32.475513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:50.932 13:23:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.932 [2024-11-20 13:23:32.542866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:50.932 [2024-11-20 13:23:32.542989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.932 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.192 BaseBdev2 00:09:51.192 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.192 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:51.192 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:51.192 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.192 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:51.192 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.192 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.192 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.192 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.192 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.192 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.192 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:51.192 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.192 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.192 [ 00:09:51.192 { 00:09:51.192 "name": "BaseBdev2", 00:09:51.192 "aliases": [ 00:09:51.193 
"fb96c5f5-f1bf-4c95-be0e-5c104f3adca0" 00:09:51.193 ], 00:09:51.193 "product_name": "Malloc disk", 00:09:51.193 "block_size": 512, 00:09:51.193 "num_blocks": 65536, 00:09:51.193 "uuid": "fb96c5f5-f1bf-4c95-be0e-5c104f3adca0", 00:09:51.193 "assigned_rate_limits": { 00:09:51.193 "rw_ios_per_sec": 0, 00:09:51.193 "rw_mbytes_per_sec": 0, 00:09:51.193 "r_mbytes_per_sec": 0, 00:09:51.193 "w_mbytes_per_sec": 0 00:09:51.193 }, 00:09:51.193 "claimed": false, 00:09:51.193 "zoned": false, 00:09:51.193 "supported_io_types": { 00:09:51.193 "read": true, 00:09:51.193 "write": true, 00:09:51.193 "unmap": true, 00:09:51.193 "flush": true, 00:09:51.193 "reset": true, 00:09:51.193 "nvme_admin": false, 00:09:51.193 "nvme_io": false, 00:09:51.193 "nvme_io_md": false, 00:09:51.193 "write_zeroes": true, 00:09:51.193 "zcopy": true, 00:09:51.193 "get_zone_info": false, 00:09:51.193 "zone_management": false, 00:09:51.193 "zone_append": false, 00:09:51.193 "compare": false, 00:09:51.193 "compare_and_write": false, 00:09:51.193 "abort": true, 00:09:51.193 "seek_hole": false, 00:09:51.193 "seek_data": false, 00:09:51.193 "copy": true, 00:09:51.193 "nvme_iov_md": false 00:09:51.193 }, 00:09:51.193 "memory_domains": [ 00:09:51.193 { 00:09:51.193 "dma_device_id": "system", 00:09:51.193 "dma_device_type": 1 00:09:51.193 }, 00:09:51.193 { 00:09:51.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.193 "dma_device_type": 2 00:09:51.193 } 00:09:51.193 ], 00:09:51.193 "driver_specific": {} 00:09:51.193 } 00:09:51.193 ] 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.193 13:23:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.193 BaseBdev3 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.193 [ 00:09:51.193 { 
00:09:51.193 "name": "BaseBdev3", 00:09:51.193 "aliases": [ 00:09:51.193 "1a169145-9a16-4890-a338-a1e40e37508b" 00:09:51.193 ], 00:09:51.193 "product_name": "Malloc disk", 00:09:51.193 "block_size": 512, 00:09:51.193 "num_blocks": 65536, 00:09:51.193 "uuid": "1a169145-9a16-4890-a338-a1e40e37508b", 00:09:51.193 "assigned_rate_limits": { 00:09:51.193 "rw_ios_per_sec": 0, 00:09:51.193 "rw_mbytes_per_sec": 0, 00:09:51.193 "r_mbytes_per_sec": 0, 00:09:51.193 "w_mbytes_per_sec": 0 00:09:51.193 }, 00:09:51.193 "claimed": false, 00:09:51.193 "zoned": false, 00:09:51.193 "supported_io_types": { 00:09:51.193 "read": true, 00:09:51.193 "write": true, 00:09:51.193 "unmap": true, 00:09:51.193 "flush": true, 00:09:51.193 "reset": true, 00:09:51.193 "nvme_admin": false, 00:09:51.193 "nvme_io": false, 00:09:51.193 "nvme_io_md": false, 00:09:51.193 "write_zeroes": true, 00:09:51.193 "zcopy": true, 00:09:51.193 "get_zone_info": false, 00:09:51.193 "zone_management": false, 00:09:51.193 "zone_append": false, 00:09:51.193 "compare": false, 00:09:51.193 "compare_and_write": false, 00:09:51.193 "abort": true, 00:09:51.193 "seek_hole": false, 00:09:51.193 "seek_data": false, 00:09:51.193 "copy": true, 00:09:51.193 "nvme_iov_md": false 00:09:51.193 }, 00:09:51.193 "memory_domains": [ 00:09:51.193 { 00:09:51.193 "dma_device_id": "system", 00:09:51.193 "dma_device_type": 1 00:09:51.193 }, 00:09:51.193 { 00:09:51.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.193 "dma_device_type": 2 00:09:51.193 } 00:09:51.193 ], 00:09:51.193 "driver_specific": {} 00:09:51.193 } 00:09:51.193 ] 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.193 BaseBdev4 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.193 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:51.193 [ 00:09:51.193 { 00:09:51.193 "name": "BaseBdev4", 00:09:51.193 "aliases": [ 00:09:51.193 "de17f25e-3d56-46d4-abe1-88bf4417515e" 00:09:51.193 ], 00:09:51.193 "product_name": "Malloc disk", 00:09:51.193 "block_size": 512, 00:09:51.193 "num_blocks": 65536, 00:09:51.193 "uuid": "de17f25e-3d56-46d4-abe1-88bf4417515e", 00:09:51.193 "assigned_rate_limits": { 00:09:51.193 "rw_ios_per_sec": 0, 00:09:51.193 "rw_mbytes_per_sec": 0, 00:09:51.193 "r_mbytes_per_sec": 0, 00:09:51.193 "w_mbytes_per_sec": 0 00:09:51.193 }, 00:09:51.193 "claimed": false, 00:09:51.193 "zoned": false, 00:09:51.193 "supported_io_types": { 00:09:51.193 "read": true, 00:09:51.193 "write": true, 00:09:51.193 "unmap": true, 00:09:51.193 "flush": true, 00:09:51.193 "reset": true, 00:09:51.194 "nvme_admin": false, 00:09:51.194 "nvme_io": false, 00:09:51.194 "nvme_io_md": false, 00:09:51.194 "write_zeroes": true, 00:09:51.194 "zcopy": true, 00:09:51.194 "get_zone_info": false, 00:09:51.194 "zone_management": false, 00:09:51.194 "zone_append": false, 00:09:51.194 "compare": false, 00:09:51.194 "compare_and_write": false, 00:09:51.194 "abort": true, 00:09:51.194 "seek_hole": false, 00:09:51.194 "seek_data": false, 00:09:51.194 "copy": true, 00:09:51.194 "nvme_iov_md": false 00:09:51.194 }, 00:09:51.194 "memory_domains": [ 00:09:51.194 { 00:09:51.194 "dma_device_id": "system", 00:09:51.194 "dma_device_type": 1 00:09:51.194 }, 00:09:51.194 { 00:09:51.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.194 "dma_device_type": 2 00:09:51.194 } 00:09:51.194 ], 00:09:51.194 "driver_specific": {} 00:09:51.194 } 00:09:51.194 ] 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:51.194 13:23:32 
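Each `waitforbdev BaseBdevN` call traced above (autotest_common.sh@903-911) polls the RPC until the bdev appears or a timeout expires. A runnable sketch of that retry pattern follows; the real helper calls `rpc_cmd bdev_get_bdevs -b NAME -t 2000`, which is stubbed out here so the loop shape can run anywhere:

```shell
#!/usr/bin/env bash
# Sketch of the waitforbdev polling loop (assumed shape, stubbed RPC).
attempt=0
query_bdev() {
  # Stub standing in for `rpc_cmd bdev_get_bdevs -b "$1" -t 2000`:
  # pretends the bdev becomes visible on the 3rd poll.
  (( ++attempt >= 3 ))
}

waitforbdev() {
  local bdev_name=$1 i
  for (( i = 0; i < 10; i++ )); do
    query_bdev "$bdev_name" && return 0
    sleep 0.1
  done
  return 1
}

waitforbdev BaseBdev1 && echo "BaseBdev1 ready after $attempt polls"
```

Prints `BaseBdev1 ready after 3 polls` with the stub above; in the trace the real RPC succeeds immediately because `bdev_malloc_create` is synchronous.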
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.194 [2024-11-20 13:23:32.760173] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:51.194 [2024-11-20 13:23:32.760228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:51.194 [2024-11-20 13:23:32.760270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.194 [2024-11-20 13:23:32.762173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.194 [2024-11-20 13:23:32.762229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.194 "name": "Existed_Raid", 00:09:51.194 "uuid": "57bdb932-b7e9-4804-b7a6-78c6df0f73fb", 00:09:51.194 "strip_size_kb": 64, 00:09:51.194 "state": "configuring", 00:09:51.194 "raid_level": "raid0", 00:09:51.194 "superblock": true, 00:09:51.194 "num_base_bdevs": 4, 00:09:51.194 "num_base_bdevs_discovered": 3, 00:09:51.194 "num_base_bdevs_operational": 4, 00:09:51.194 "base_bdevs_list": [ 00:09:51.194 { 00:09:51.194 "name": "BaseBdev1", 00:09:51.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.194 "is_configured": false, 00:09:51.194 "data_offset": 0, 00:09:51.194 "data_size": 0 00:09:51.194 }, 00:09:51.194 { 00:09:51.194 "name": "BaseBdev2", 00:09:51.194 "uuid": "fb96c5f5-f1bf-4c95-be0e-5c104f3adca0", 00:09:51.194 "is_configured": true, 00:09:51.194 "data_offset": 2048, 00:09:51.194 "data_size": 63488 
00:09:51.194 }, 00:09:51.194 { 00:09:51.194 "name": "BaseBdev3", 00:09:51.194 "uuid": "1a169145-9a16-4890-a338-a1e40e37508b", 00:09:51.194 "is_configured": true, 00:09:51.194 "data_offset": 2048, 00:09:51.194 "data_size": 63488 00:09:51.194 }, 00:09:51.194 { 00:09:51.194 "name": "BaseBdev4", 00:09:51.194 "uuid": "de17f25e-3d56-46d4-abe1-88bf4417515e", 00:09:51.194 "is_configured": true, 00:09:51.194 "data_offset": 2048, 00:09:51.194 "data_size": 63488 00:09:51.194 } 00:09:51.194 ] 00:09:51.194 }' 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.194 13:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.763 [2024-11-20 13:23:33.215446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.763 "name": "Existed_Raid", 00:09:51.763 "uuid": "57bdb932-b7e9-4804-b7a6-78c6df0f73fb", 00:09:51.763 "strip_size_kb": 64, 00:09:51.763 "state": "configuring", 00:09:51.763 "raid_level": "raid0", 00:09:51.763 "superblock": true, 00:09:51.763 "num_base_bdevs": 4, 00:09:51.763 "num_base_bdevs_discovered": 2, 00:09:51.763 "num_base_bdevs_operational": 4, 00:09:51.763 "base_bdevs_list": [ 00:09:51.763 { 00:09:51.763 "name": "BaseBdev1", 00:09:51.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.763 "is_configured": false, 00:09:51.763 "data_offset": 0, 00:09:51.763 "data_size": 0 00:09:51.763 }, 00:09:51.763 { 00:09:51.763 "name": null, 00:09:51.763 "uuid": "fb96c5f5-f1bf-4c95-be0e-5c104f3adca0", 00:09:51.763 "is_configured": false, 00:09:51.763 "data_offset": 0, 00:09:51.763 "data_size": 63488 
00:09:51.763 }, 00:09:51.763 { 00:09:51.763 "name": "BaseBdev3", 00:09:51.763 "uuid": "1a169145-9a16-4890-a338-a1e40e37508b", 00:09:51.763 "is_configured": true, 00:09:51.763 "data_offset": 2048, 00:09:51.763 "data_size": 63488 00:09:51.763 }, 00:09:51.763 { 00:09:51.763 "name": "BaseBdev4", 00:09:51.763 "uuid": "de17f25e-3d56-46d4-abe1-88bf4417515e", 00:09:51.763 "is_configured": true, 00:09:51.763 "data_offset": 2048, 00:09:51.763 "data_size": 63488 00:09:51.763 } 00:09:51.763 ] 00:09:51.763 }' 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.763 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.022 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:52.022 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.022 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.022 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.022 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.022 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:52.022 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:52.022 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.022 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.281 [2024-11-20 13:23:33.693896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.281 BaseBdev1 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.281 [ 00:09:52.281 { 00:09:52.281 "name": "BaseBdev1", 00:09:52.281 "aliases": [ 00:09:52.281 "f61cca60-da0c-4f97-bf42-89a45a51f585" 00:09:52.281 ], 00:09:52.281 "product_name": "Malloc disk", 00:09:52.281 "block_size": 512, 00:09:52.281 "num_blocks": 65536, 00:09:52.281 "uuid": "f61cca60-da0c-4f97-bf42-89a45a51f585", 00:09:52.281 "assigned_rate_limits": { 00:09:52.281 "rw_ios_per_sec": 0, 00:09:52.281 "rw_mbytes_per_sec": 0, 
00:09:52.281 "r_mbytes_per_sec": 0, 00:09:52.281 "w_mbytes_per_sec": 0 00:09:52.281 }, 00:09:52.281 "claimed": true, 00:09:52.281 "claim_type": "exclusive_write", 00:09:52.281 "zoned": false, 00:09:52.281 "supported_io_types": { 00:09:52.281 "read": true, 00:09:52.281 "write": true, 00:09:52.281 "unmap": true, 00:09:52.281 "flush": true, 00:09:52.281 "reset": true, 00:09:52.281 "nvme_admin": false, 00:09:52.281 "nvme_io": false, 00:09:52.281 "nvme_io_md": false, 00:09:52.281 "write_zeroes": true, 00:09:52.281 "zcopy": true, 00:09:52.281 "get_zone_info": false, 00:09:52.281 "zone_management": false, 00:09:52.281 "zone_append": false, 00:09:52.281 "compare": false, 00:09:52.281 "compare_and_write": false, 00:09:52.281 "abort": true, 00:09:52.281 "seek_hole": false, 00:09:52.281 "seek_data": false, 00:09:52.281 "copy": true, 00:09:52.281 "nvme_iov_md": false 00:09:52.281 }, 00:09:52.281 "memory_domains": [ 00:09:52.281 { 00:09:52.281 "dma_device_id": "system", 00:09:52.281 "dma_device_type": 1 00:09:52.281 }, 00:09:52.281 { 00:09:52.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.281 "dma_device_type": 2 00:09:52.281 } 00:09:52.281 ], 00:09:52.281 "driver_specific": {} 00:09:52.281 } 00:09:52.281 ] 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.281 13:23:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.281 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.281 "name": "Existed_Raid", 00:09:52.281 "uuid": "57bdb932-b7e9-4804-b7a6-78c6df0f73fb", 00:09:52.281 "strip_size_kb": 64, 00:09:52.281 "state": "configuring", 00:09:52.281 "raid_level": "raid0", 00:09:52.281 "superblock": true, 00:09:52.281 "num_base_bdevs": 4, 00:09:52.281 "num_base_bdevs_discovered": 3, 00:09:52.281 "num_base_bdevs_operational": 4, 00:09:52.281 "base_bdevs_list": [ 00:09:52.281 { 00:09:52.281 "name": "BaseBdev1", 00:09:52.281 "uuid": "f61cca60-da0c-4f97-bf42-89a45a51f585", 00:09:52.281 "is_configured": true, 00:09:52.281 "data_offset": 2048, 00:09:52.281 "data_size": 63488 00:09:52.281 }, 00:09:52.281 { 
00:09:52.281 "name": null, 00:09:52.281 "uuid": "fb96c5f5-f1bf-4c95-be0e-5c104f3adca0", 00:09:52.281 "is_configured": false, 00:09:52.281 "data_offset": 0, 00:09:52.282 "data_size": 63488 00:09:52.282 }, 00:09:52.282 { 00:09:52.282 "name": "BaseBdev3", 00:09:52.282 "uuid": "1a169145-9a16-4890-a338-a1e40e37508b", 00:09:52.282 "is_configured": true, 00:09:52.282 "data_offset": 2048, 00:09:52.282 "data_size": 63488 00:09:52.282 }, 00:09:52.282 { 00:09:52.282 "name": "BaseBdev4", 00:09:52.282 "uuid": "de17f25e-3d56-46d4-abe1-88bf4417515e", 00:09:52.282 "is_configured": true, 00:09:52.282 "data_offset": 2048, 00:09:52.282 "data_size": 63488 00:09:52.282 } 00:09:52.282 ] 00:09:52.282 }' 00:09:52.282 13:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.282 13:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.541 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.541 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:52.541 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.541 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.541 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.801 [2024-11-20 13:23:34.233128] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.801 13:23:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.801 "name": "Existed_Raid", 00:09:52.801 "uuid": "57bdb932-b7e9-4804-b7a6-78c6df0f73fb", 00:09:52.801 "strip_size_kb": 64, 00:09:52.801 "state": "configuring", 00:09:52.801 "raid_level": "raid0", 00:09:52.801 "superblock": true, 00:09:52.801 "num_base_bdevs": 4, 00:09:52.801 "num_base_bdevs_discovered": 2, 00:09:52.801 "num_base_bdevs_operational": 4, 00:09:52.801 "base_bdevs_list": [ 00:09:52.801 { 00:09:52.801 "name": "BaseBdev1", 00:09:52.801 "uuid": "f61cca60-da0c-4f97-bf42-89a45a51f585", 00:09:52.801 "is_configured": true, 00:09:52.801 "data_offset": 2048, 00:09:52.801 "data_size": 63488 00:09:52.801 }, 00:09:52.801 { 00:09:52.801 "name": null, 00:09:52.801 "uuid": "fb96c5f5-f1bf-4c95-be0e-5c104f3adca0", 00:09:52.801 "is_configured": false, 00:09:52.801 "data_offset": 0, 00:09:52.801 "data_size": 63488 00:09:52.801 }, 00:09:52.801 { 00:09:52.801 "name": null, 00:09:52.801 "uuid": "1a169145-9a16-4890-a338-a1e40e37508b", 00:09:52.801 "is_configured": false, 00:09:52.801 "data_offset": 0, 00:09:52.801 "data_size": 63488 00:09:52.801 }, 00:09:52.801 { 00:09:52.801 "name": "BaseBdev4", 00:09:52.801 "uuid": "de17f25e-3d56-46d4-abe1-88bf4417515e", 00:09:52.801 "is_configured": true, 00:09:52.801 "data_offset": 2048, 00:09:52.801 "data_size": 63488 00:09:52.801 } 00:09:52.801 ] 00:09:52.801 }' 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.801 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.061 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.061 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:53.061 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.061 
13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.321 [2024-11-20 13:23:34.764223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.321 "name": "Existed_Raid", 00:09:53.321 "uuid": "57bdb932-b7e9-4804-b7a6-78c6df0f73fb", 00:09:53.321 "strip_size_kb": 64, 00:09:53.321 "state": "configuring", 00:09:53.321 "raid_level": "raid0", 00:09:53.321 "superblock": true, 00:09:53.321 "num_base_bdevs": 4, 00:09:53.321 "num_base_bdevs_discovered": 3, 00:09:53.321 "num_base_bdevs_operational": 4, 00:09:53.321 "base_bdevs_list": [ 00:09:53.321 { 00:09:53.321 "name": "BaseBdev1", 00:09:53.321 "uuid": "f61cca60-da0c-4f97-bf42-89a45a51f585", 00:09:53.321 "is_configured": true, 00:09:53.321 "data_offset": 2048, 00:09:53.321 "data_size": 63488 00:09:53.321 }, 00:09:53.321 { 00:09:53.321 "name": null, 00:09:53.321 "uuid": "fb96c5f5-f1bf-4c95-be0e-5c104f3adca0", 00:09:53.321 "is_configured": false, 00:09:53.321 "data_offset": 0, 00:09:53.321 "data_size": 63488 00:09:53.321 }, 00:09:53.321 { 00:09:53.321 "name": "BaseBdev3", 00:09:53.321 "uuid": "1a169145-9a16-4890-a338-a1e40e37508b", 00:09:53.321 "is_configured": true, 00:09:53.321 "data_offset": 2048, 00:09:53.321 "data_size": 63488 00:09:53.321 }, 00:09:53.321 { 00:09:53.321 "name": "BaseBdev4", 00:09:53.321 "uuid": 
"de17f25e-3d56-46d4-abe1-88bf4417515e", 00:09:53.321 "is_configured": true, 00:09:53.321 "data_offset": 2048, 00:09:53.321 "data_size": 63488 00:09:53.321 } 00:09:53.321 ] 00:09:53.321 }' 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.321 13:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.581 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:53.581 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.581 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.581 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.581 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.841 [2024-11-20 13:23:35.267429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.841 "name": "Existed_Raid", 00:09:53.841 "uuid": "57bdb932-b7e9-4804-b7a6-78c6df0f73fb", 00:09:53.841 "strip_size_kb": 64, 00:09:53.841 "state": "configuring", 00:09:53.841 "raid_level": "raid0", 00:09:53.841 "superblock": true, 00:09:53.841 "num_base_bdevs": 4, 00:09:53.841 "num_base_bdevs_discovered": 2, 00:09:53.841 "num_base_bdevs_operational": 4, 00:09:53.841 "base_bdevs_list": [ 00:09:53.841 { 00:09:53.841 "name": null, 00:09:53.841 
"uuid": "f61cca60-da0c-4f97-bf42-89a45a51f585", 00:09:53.841 "is_configured": false, 00:09:53.841 "data_offset": 0, 00:09:53.841 "data_size": 63488 00:09:53.841 }, 00:09:53.841 { 00:09:53.841 "name": null, 00:09:53.841 "uuid": "fb96c5f5-f1bf-4c95-be0e-5c104f3adca0", 00:09:53.841 "is_configured": false, 00:09:53.841 "data_offset": 0, 00:09:53.841 "data_size": 63488 00:09:53.841 }, 00:09:53.841 { 00:09:53.841 "name": "BaseBdev3", 00:09:53.841 "uuid": "1a169145-9a16-4890-a338-a1e40e37508b", 00:09:53.841 "is_configured": true, 00:09:53.841 "data_offset": 2048, 00:09:53.841 "data_size": 63488 00:09:53.841 }, 00:09:53.841 { 00:09:53.841 "name": "BaseBdev4", 00:09:53.841 "uuid": "de17f25e-3d56-46d4-abe1-88bf4417515e", 00:09:53.841 "is_configured": true, 00:09:53.841 "data_offset": 2048, 00:09:53.841 "data_size": 63488 00:09:53.841 } 00:09:53.841 ] 00:09:53.841 }' 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.841 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.100 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.100 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:54.100 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.100 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.101 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.360 [2024-11-20 13:23:35.773377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.360 13:23:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.360 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.360 "name": "Existed_Raid", 00:09:54.360 "uuid": "57bdb932-b7e9-4804-b7a6-78c6df0f73fb", 00:09:54.360 "strip_size_kb": 64, 00:09:54.360 "state": "configuring", 00:09:54.360 "raid_level": "raid0", 00:09:54.361 "superblock": true, 00:09:54.361 "num_base_bdevs": 4, 00:09:54.361 "num_base_bdevs_discovered": 3, 00:09:54.361 "num_base_bdevs_operational": 4, 00:09:54.361 "base_bdevs_list": [ 00:09:54.361 { 00:09:54.361 "name": null, 00:09:54.361 "uuid": "f61cca60-da0c-4f97-bf42-89a45a51f585", 00:09:54.361 "is_configured": false, 00:09:54.361 "data_offset": 0, 00:09:54.361 "data_size": 63488 00:09:54.361 }, 00:09:54.361 { 00:09:54.361 "name": "BaseBdev2", 00:09:54.361 "uuid": "fb96c5f5-f1bf-4c95-be0e-5c104f3adca0", 00:09:54.361 "is_configured": true, 00:09:54.361 "data_offset": 2048, 00:09:54.361 "data_size": 63488 00:09:54.361 }, 00:09:54.361 { 00:09:54.361 "name": "BaseBdev3", 00:09:54.361 "uuid": "1a169145-9a16-4890-a338-a1e40e37508b", 00:09:54.361 "is_configured": true, 00:09:54.361 "data_offset": 2048, 00:09:54.361 "data_size": 63488 00:09:54.361 }, 00:09:54.361 { 00:09:54.361 "name": "BaseBdev4", 00:09:54.361 "uuid": "de17f25e-3d56-46d4-abe1-88bf4417515e", 00:09:54.361 "is_configured": true, 00:09:54.361 "data_offset": 2048, 00:09:54.361 "data_size": 63488 00:09:54.361 } 00:09:54.361 ] 00:09:54.361 }' 00:09:54.361 13:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.361 13:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.622 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:54.622 13:23:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.622 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.622 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.622 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.622 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:54.622 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.622 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.622 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.622 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:54.622 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.882 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f61cca60-da0c-4f97-bf42-89a45a51f585 00:09:54.882 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.882 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.882 [2024-11-20 13:23:36.335641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:54.882 [2024-11-20 13:23:36.335936] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:54.882 [2024-11-20 13:23:36.336004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:54.882 NewBaseBdev 00:09:54.882 [2024-11-20 13:23:36.336306] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:09:54.882 [2024-11-20 13:23:36.336439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:54.882 [2024-11-20 13:23:36.336512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:54.882 [2024-11-20 13:23:36.336668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.882 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.882 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:54.882 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:54.882 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.882 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:54.882 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.882 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.882 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.882 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.882 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.882 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.882 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:54.882 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.882 13:23:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.882 [ 00:09:54.882 { 00:09:54.882 "name": "NewBaseBdev", 00:09:54.882 "aliases": [ 00:09:54.882 "f61cca60-da0c-4f97-bf42-89a45a51f585" 00:09:54.882 ], 00:09:54.882 "product_name": "Malloc disk", 00:09:54.882 "block_size": 512, 00:09:54.882 "num_blocks": 65536, 00:09:54.882 "uuid": "f61cca60-da0c-4f97-bf42-89a45a51f585", 00:09:54.882 "assigned_rate_limits": { 00:09:54.882 "rw_ios_per_sec": 0, 00:09:54.882 "rw_mbytes_per_sec": 0, 00:09:54.882 "r_mbytes_per_sec": 0, 00:09:54.882 "w_mbytes_per_sec": 0 00:09:54.882 }, 00:09:54.882 "claimed": true, 00:09:54.882 "claim_type": "exclusive_write", 00:09:54.882 "zoned": false, 00:09:54.882 "supported_io_types": { 00:09:54.882 "read": true, 00:09:54.882 "write": true, 00:09:54.882 "unmap": true, 00:09:54.882 "flush": true, 00:09:54.882 "reset": true, 00:09:54.882 "nvme_admin": false, 00:09:54.882 "nvme_io": false, 00:09:54.882 "nvme_io_md": false, 00:09:54.882 "write_zeroes": true, 00:09:54.882 "zcopy": true, 00:09:54.883 "get_zone_info": false, 00:09:54.883 "zone_management": false, 00:09:54.883 "zone_append": false, 00:09:54.883 "compare": false, 00:09:54.883 "compare_and_write": false, 00:09:54.883 "abort": true, 00:09:54.883 "seek_hole": false, 00:09:54.883 "seek_data": false, 00:09:54.883 "copy": true, 00:09:54.883 "nvme_iov_md": false 00:09:54.883 }, 00:09:54.883 "memory_domains": [ 00:09:54.883 { 00:09:54.883 "dma_device_id": "system", 00:09:54.883 "dma_device_type": 1 00:09:54.883 }, 00:09:54.883 { 00:09:54.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.883 "dma_device_type": 2 00:09:54.883 } 00:09:54.883 ], 00:09:54.883 "driver_specific": {} 00:09:54.883 } 00:09:54.883 ] 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:54.883 13:23:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.883 "name": "Existed_Raid", 00:09:54.883 "uuid": "57bdb932-b7e9-4804-b7a6-78c6df0f73fb", 00:09:54.883 "strip_size_kb": 64, 00:09:54.883 
"state": "online", 00:09:54.883 "raid_level": "raid0", 00:09:54.883 "superblock": true, 00:09:54.883 "num_base_bdevs": 4, 00:09:54.883 "num_base_bdevs_discovered": 4, 00:09:54.883 "num_base_bdevs_operational": 4, 00:09:54.883 "base_bdevs_list": [ 00:09:54.883 { 00:09:54.883 "name": "NewBaseBdev", 00:09:54.883 "uuid": "f61cca60-da0c-4f97-bf42-89a45a51f585", 00:09:54.883 "is_configured": true, 00:09:54.883 "data_offset": 2048, 00:09:54.883 "data_size": 63488 00:09:54.883 }, 00:09:54.883 { 00:09:54.883 "name": "BaseBdev2", 00:09:54.883 "uuid": "fb96c5f5-f1bf-4c95-be0e-5c104f3adca0", 00:09:54.883 "is_configured": true, 00:09:54.883 "data_offset": 2048, 00:09:54.883 "data_size": 63488 00:09:54.883 }, 00:09:54.883 { 00:09:54.883 "name": "BaseBdev3", 00:09:54.883 "uuid": "1a169145-9a16-4890-a338-a1e40e37508b", 00:09:54.883 "is_configured": true, 00:09:54.883 "data_offset": 2048, 00:09:54.883 "data_size": 63488 00:09:54.883 }, 00:09:54.883 { 00:09:54.883 "name": "BaseBdev4", 00:09:54.883 "uuid": "de17f25e-3d56-46d4-abe1-88bf4417515e", 00:09:54.883 "is_configured": true, 00:09:54.883 "data_offset": 2048, 00:09:54.883 "data_size": 63488 00:09:54.883 } 00:09:54.883 ] 00:09:54.883 }' 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.883 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.144 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:55.144 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:55.144 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.144 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:55.144 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.144 
13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.144 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:55.144 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.144 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.144 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.144 [2024-11-20 13:23:36.807381] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.404 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.404 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:55.404 "name": "Existed_Raid", 00:09:55.404 "aliases": [ 00:09:55.404 "57bdb932-b7e9-4804-b7a6-78c6df0f73fb" 00:09:55.404 ], 00:09:55.404 "product_name": "Raid Volume", 00:09:55.404 "block_size": 512, 00:09:55.404 "num_blocks": 253952, 00:09:55.404 "uuid": "57bdb932-b7e9-4804-b7a6-78c6df0f73fb", 00:09:55.404 "assigned_rate_limits": { 00:09:55.404 "rw_ios_per_sec": 0, 00:09:55.404 "rw_mbytes_per_sec": 0, 00:09:55.404 "r_mbytes_per_sec": 0, 00:09:55.404 "w_mbytes_per_sec": 0 00:09:55.404 }, 00:09:55.404 "claimed": false, 00:09:55.404 "zoned": false, 00:09:55.404 "supported_io_types": { 00:09:55.404 "read": true, 00:09:55.404 "write": true, 00:09:55.404 "unmap": true, 00:09:55.404 "flush": true, 00:09:55.404 "reset": true, 00:09:55.404 "nvme_admin": false, 00:09:55.404 "nvme_io": false, 00:09:55.404 "nvme_io_md": false, 00:09:55.404 "write_zeroes": true, 00:09:55.404 "zcopy": false, 00:09:55.404 "get_zone_info": false, 00:09:55.404 "zone_management": false, 00:09:55.404 "zone_append": false, 00:09:55.404 "compare": false, 00:09:55.404 "compare_and_write": false, 00:09:55.404 "abort": 
false, 00:09:55.404 "seek_hole": false, 00:09:55.404 "seek_data": false, 00:09:55.404 "copy": false, 00:09:55.404 "nvme_iov_md": false 00:09:55.405 }, 00:09:55.405 "memory_domains": [ 00:09:55.405 { 00:09:55.405 "dma_device_id": "system", 00:09:55.405 "dma_device_type": 1 00:09:55.405 }, 00:09:55.405 { 00:09:55.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.405 "dma_device_type": 2 00:09:55.405 }, 00:09:55.405 { 00:09:55.405 "dma_device_id": "system", 00:09:55.405 "dma_device_type": 1 00:09:55.405 }, 00:09:55.405 { 00:09:55.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.405 "dma_device_type": 2 00:09:55.405 }, 00:09:55.405 { 00:09:55.405 "dma_device_id": "system", 00:09:55.405 "dma_device_type": 1 00:09:55.405 }, 00:09:55.405 { 00:09:55.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.405 "dma_device_type": 2 00:09:55.405 }, 00:09:55.405 { 00:09:55.405 "dma_device_id": "system", 00:09:55.405 "dma_device_type": 1 00:09:55.405 }, 00:09:55.405 { 00:09:55.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.405 "dma_device_type": 2 00:09:55.405 } 00:09:55.405 ], 00:09:55.405 "driver_specific": { 00:09:55.405 "raid": { 00:09:55.405 "uuid": "57bdb932-b7e9-4804-b7a6-78c6df0f73fb", 00:09:55.405 "strip_size_kb": 64, 00:09:55.405 "state": "online", 00:09:55.405 "raid_level": "raid0", 00:09:55.405 "superblock": true, 00:09:55.405 "num_base_bdevs": 4, 00:09:55.405 "num_base_bdevs_discovered": 4, 00:09:55.405 "num_base_bdevs_operational": 4, 00:09:55.405 "base_bdevs_list": [ 00:09:55.405 { 00:09:55.405 "name": "NewBaseBdev", 00:09:55.405 "uuid": "f61cca60-da0c-4f97-bf42-89a45a51f585", 00:09:55.405 "is_configured": true, 00:09:55.405 "data_offset": 2048, 00:09:55.405 "data_size": 63488 00:09:55.405 }, 00:09:55.405 { 00:09:55.405 "name": "BaseBdev2", 00:09:55.405 "uuid": "fb96c5f5-f1bf-4c95-be0e-5c104f3adca0", 00:09:55.405 "is_configured": true, 00:09:55.405 "data_offset": 2048, 00:09:55.405 "data_size": 63488 00:09:55.405 }, 00:09:55.405 { 00:09:55.405 
"name": "BaseBdev3", 00:09:55.405 "uuid": "1a169145-9a16-4890-a338-a1e40e37508b", 00:09:55.405 "is_configured": true, 00:09:55.405 "data_offset": 2048, 00:09:55.405 "data_size": 63488 00:09:55.405 }, 00:09:55.405 { 00:09:55.405 "name": "BaseBdev4", 00:09:55.405 "uuid": "de17f25e-3d56-46d4-abe1-88bf4417515e", 00:09:55.405 "is_configured": true, 00:09:55.405 "data_offset": 2048, 00:09:55.405 "data_size": 63488 00:09:55.405 } 00:09:55.405 ] 00:09:55.405 } 00:09:55.405 } 00:09:55.405 }' 00:09:55.405 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.405 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:55.405 BaseBdev2 00:09:55.405 BaseBdev3 00:09:55.405 BaseBdev4' 00:09:55.405 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.405 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.405 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.405 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:55.405 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.405 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.405 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.405 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.405 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.405 13:23:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.405 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.405 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:55.405 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.405 13:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.405 13:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.405 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.405 13:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.405 13:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.405 13:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.405 13:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:55.405 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.405 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.405 13:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.405 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.665 [2024-11-20 13:23:37.146376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.665 [2024-11-20 13:23:37.146462] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.665 [2024-11-20 13:23:37.146558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.665 [2024-11-20 13:23:37.146647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.665 [2024-11-20 13:23:37.146658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80701 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80701 ']' 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80701 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80701 00:09:55.665 killing process with pid 80701 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80701' 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80701 00:09:55.665 [2024-11-20 13:23:37.195007] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.665 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80701 00:09:55.665 [2024-11-20 13:23:37.237220] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.925 13:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:55.925 00:09:55.925 real 0m9.620s 00:09:55.925 user 0m16.509s 00:09:55.925 sys 0m1.968s 00:09:55.925 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.925 
************************************ 00:09:55.925 END TEST raid_state_function_test_sb 00:09:55.925 ************************************ 00:09:55.925 13:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.925 13:23:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:55.925 13:23:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:55.925 13:23:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.925 13:23:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.925 ************************************ 00:09:55.925 START TEST raid_superblock_test 00:09:55.925 ************************************ 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81355 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81355 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81355 ']' 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.925 13:23:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.184 [2024-11-20 13:23:37.614311] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:09:56.184 [2024-11-20 13:23:37.614529] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81355 ] 00:09:56.184 [2024-11-20 13:23:37.771672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.184 [2024-11-20 13:23:37.799713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.185 [2024-11-20 13:23:37.844574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.185 [2024-11-20 13:23:37.844717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.124 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.124 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:57.124 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:57.124 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:57.124 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:57.124 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:57.124 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:57.124 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:57.124 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:57.124 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:57.124 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:57.124 
13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.124 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.124 malloc1 00:09:57.124 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.124 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:57.124 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.124 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.124 [2024-11-20 13:23:38.476630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:57.124 [2024-11-20 13:23:38.476711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.125 [2024-11-20 13:23:38.476735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:57.125 [2024-11-20 13:23:38.476752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.125 [2024-11-20 13:23:38.479104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.125 [2024-11-20 13:23:38.479147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:57.125 pt1 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.125 malloc2 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.125 [2024-11-20 13:23:38.509697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:57.125 [2024-11-20 13:23:38.509828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.125 [2024-11-20 13:23:38.509867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:57.125 [2024-11-20 13:23:38.509901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.125 [2024-11-20 13:23:38.512148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.125 [2024-11-20 13:23:38.512233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:57.125 
pt2 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.125 malloc3 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.125 [2024-11-20 13:23:38.542610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:57.125 [2024-11-20 13:23:38.542743] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.125 [2024-11-20 13:23:38.542788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:57.125 [2024-11-20 13:23:38.542831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.125 [2024-11-20 13:23:38.545014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.125 [2024-11-20 13:23:38.545097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:57.125 pt3 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.125 malloc4 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.125 [2024-11-20 13:23:38.585953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:57.125 [2024-11-20 13:23:38.586044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.125 [2024-11-20 13:23:38.586068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:57.125 [2024-11-20 13:23:38.586087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.125 [2024-11-20 13:23:38.588499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.125 [2024-11-20 13:23:38.588549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:57.125 pt4 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.125 [2024-11-20 13:23:38.597929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:57.125 [2024-11-20 
13:23:38.599969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:57.125 [2024-11-20 13:23:38.600070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:57.125 [2024-11-20 13:23:38.600130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:57.125 [2024-11-20 13:23:38.600313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:57.125 [2024-11-20 13:23:38.600346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:57.125 [2024-11-20 13:23:38.600637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:57.125 [2024-11-20 13:23:38.600834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:57.125 [2024-11-20 13:23:38.600856] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:57.125 [2024-11-20 13:23:38.601025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.125 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.125 "name": "raid_bdev1", 00:09:57.125 "uuid": "8463ba9f-990e-4a4c-b4cc-b37f24e96dda", 00:09:57.125 "strip_size_kb": 64, 00:09:57.125 "state": "online", 00:09:57.125 "raid_level": "raid0", 00:09:57.126 "superblock": true, 00:09:57.126 "num_base_bdevs": 4, 00:09:57.126 "num_base_bdevs_discovered": 4, 00:09:57.126 "num_base_bdevs_operational": 4, 00:09:57.126 "base_bdevs_list": [ 00:09:57.126 { 00:09:57.126 "name": "pt1", 00:09:57.126 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:57.126 "is_configured": true, 00:09:57.126 "data_offset": 2048, 00:09:57.126 "data_size": 63488 00:09:57.126 }, 00:09:57.126 { 00:09:57.126 "name": "pt2", 00:09:57.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.126 "is_configured": true, 00:09:57.126 "data_offset": 2048, 00:09:57.126 "data_size": 63488 00:09:57.126 }, 00:09:57.126 { 00:09:57.126 "name": "pt3", 00:09:57.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:57.126 "is_configured": true, 00:09:57.126 "data_offset": 2048, 00:09:57.126 
"data_size": 63488 00:09:57.126 }, 00:09:57.126 { 00:09:57.126 "name": "pt4", 00:09:57.126 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:57.126 "is_configured": true, 00:09:57.126 "data_offset": 2048, 00:09:57.126 "data_size": 63488 00:09:57.126 } 00:09:57.126 ] 00:09:57.126 }' 00:09:57.126 13:23:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.126 13:23:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.386 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:57.386 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:57.386 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:57.386 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:57.386 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:57.386 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:57.386 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:57.386 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:57.386 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.386 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.386 [2024-11-20 13:23:39.049507] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.660 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.660 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:57.661 "name": "raid_bdev1", 00:09:57.661 "aliases": [ 00:09:57.661 "8463ba9f-990e-4a4c-b4cc-b37f24e96dda" 
00:09:57.661 ], 00:09:57.661 "product_name": "Raid Volume", 00:09:57.661 "block_size": 512, 00:09:57.661 "num_blocks": 253952, 00:09:57.661 "uuid": "8463ba9f-990e-4a4c-b4cc-b37f24e96dda", 00:09:57.661 "assigned_rate_limits": { 00:09:57.661 "rw_ios_per_sec": 0, 00:09:57.661 "rw_mbytes_per_sec": 0, 00:09:57.661 "r_mbytes_per_sec": 0, 00:09:57.661 "w_mbytes_per_sec": 0 00:09:57.661 }, 00:09:57.661 "claimed": false, 00:09:57.661 "zoned": false, 00:09:57.661 "supported_io_types": { 00:09:57.661 "read": true, 00:09:57.661 "write": true, 00:09:57.661 "unmap": true, 00:09:57.661 "flush": true, 00:09:57.661 "reset": true, 00:09:57.661 "nvme_admin": false, 00:09:57.661 "nvme_io": false, 00:09:57.661 "nvme_io_md": false, 00:09:57.661 "write_zeroes": true, 00:09:57.661 "zcopy": false, 00:09:57.661 "get_zone_info": false, 00:09:57.661 "zone_management": false, 00:09:57.661 "zone_append": false, 00:09:57.661 "compare": false, 00:09:57.661 "compare_and_write": false, 00:09:57.661 "abort": false, 00:09:57.661 "seek_hole": false, 00:09:57.661 "seek_data": false, 00:09:57.661 "copy": false, 00:09:57.661 "nvme_iov_md": false 00:09:57.661 }, 00:09:57.661 "memory_domains": [ 00:09:57.661 { 00:09:57.661 "dma_device_id": "system", 00:09:57.661 "dma_device_type": 1 00:09:57.661 }, 00:09:57.661 { 00:09:57.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.661 "dma_device_type": 2 00:09:57.661 }, 00:09:57.661 { 00:09:57.661 "dma_device_id": "system", 00:09:57.661 "dma_device_type": 1 00:09:57.661 }, 00:09:57.661 { 00:09:57.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.661 "dma_device_type": 2 00:09:57.661 }, 00:09:57.661 { 00:09:57.661 "dma_device_id": "system", 00:09:57.661 "dma_device_type": 1 00:09:57.661 }, 00:09:57.661 { 00:09:57.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.661 "dma_device_type": 2 00:09:57.661 }, 00:09:57.661 { 00:09:57.661 "dma_device_id": "system", 00:09:57.661 "dma_device_type": 1 00:09:57.661 }, 00:09:57.661 { 00:09:57.662 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:57.662 "dma_device_type": 2 00:09:57.662 } 00:09:57.662 ], 00:09:57.662 "driver_specific": { 00:09:57.662 "raid": { 00:09:57.662 "uuid": "8463ba9f-990e-4a4c-b4cc-b37f24e96dda", 00:09:57.662 "strip_size_kb": 64, 00:09:57.662 "state": "online", 00:09:57.662 "raid_level": "raid0", 00:09:57.662 "superblock": true, 00:09:57.662 "num_base_bdevs": 4, 00:09:57.662 "num_base_bdevs_discovered": 4, 00:09:57.662 "num_base_bdevs_operational": 4, 00:09:57.662 "base_bdevs_list": [ 00:09:57.662 { 00:09:57.662 "name": "pt1", 00:09:57.662 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:57.662 "is_configured": true, 00:09:57.662 "data_offset": 2048, 00:09:57.662 "data_size": 63488 00:09:57.662 }, 00:09:57.662 { 00:09:57.662 "name": "pt2", 00:09:57.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.662 "is_configured": true, 00:09:57.662 "data_offset": 2048, 00:09:57.662 "data_size": 63488 00:09:57.662 }, 00:09:57.662 { 00:09:57.662 "name": "pt3", 00:09:57.662 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:57.662 "is_configured": true, 00:09:57.662 "data_offset": 2048, 00:09:57.662 "data_size": 63488 00:09:57.662 }, 00:09:57.662 { 00:09:57.662 "name": "pt4", 00:09:57.662 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:57.662 "is_configured": true, 00:09:57.662 "data_offset": 2048, 00:09:57.662 "data_size": 63488 00:09:57.662 } 00:09:57.662 ] 00:09:57.662 } 00:09:57.662 } 00:09:57.662 }' 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:57.662 pt2 00:09:57.662 pt3 00:09:57.662 pt4' 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.662 13:23:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:57.662 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.662 [2024-11-20 13:23:39.309120] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8463ba9f-990e-4a4c-b4cc-b37f24e96dda 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8463ba9f-990e-4a4c-b4cc-b37f24e96dda ']' 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.929 [2024-11-20 13:23:39.340698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:57.929 [2024-11-20 13:23:39.340746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.929 [2024-11-20 13:23:39.340862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.929 [2024-11-20 13:23:39.340971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.929 [2024-11-20 13:23:39.341010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.929 13:23:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.929 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.929 [2024-11-20 13:23:39.500492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:57.929 [2024-11-20 13:23:39.502461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:57.929 [2024-11-20 13:23:39.502540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:57.929 [2024-11-20 13:23:39.502576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:57.929 [2024-11-20 13:23:39.502635] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:57.929 [2024-11-20 13:23:39.502713] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:57.929 [2024-11-20 13:23:39.502738] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:57.929 [2024-11-20 13:23:39.502758] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:57.929 [2024-11-20 13:23:39.502777] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:57.929 [2024-11-20 13:23:39.502789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001580 name raid_bdev1, state configuring 00:09:57.929 request: 00:09:57.929 { 00:09:57.929 "name": "raid_bdev1", 00:09:57.929 "raid_level": "raid0", 00:09:57.929 "base_bdevs": [ 00:09:57.929 "malloc1", 00:09:57.930 "malloc2", 00:09:57.930 "malloc3", 00:09:57.930 "malloc4" 00:09:57.930 ], 00:09:57.930 "strip_size_kb": 64, 00:09:57.930 "superblock": false, 00:09:57.930 "method": "bdev_raid_create", 00:09:57.930 "req_id": 1 00:09:57.930 } 00:09:57.930 Got JSON-RPC error response 00:09:57.930 response: 00:09:57.930 { 00:09:57.930 "code": -17, 00:09:57.930 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:57.930 } 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.930 [2024-11-20 13:23:39.568294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:57.930 [2024-11-20 13:23:39.568364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.930 [2024-11-20 13:23:39.568390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:57.930 [2024-11-20 13:23:39.568400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.930 [2024-11-20 13:23:39.570635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.930 [2024-11-20 13:23:39.570675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:57.930 [2024-11-20 13:23:39.570764] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:57.930 [2024-11-20 13:23:39.570807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:57.930 pt1 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.930 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.190 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.190 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.190 "name": "raid_bdev1", 00:09:58.190 "uuid": "8463ba9f-990e-4a4c-b4cc-b37f24e96dda", 00:09:58.190 "strip_size_kb": 64, 00:09:58.190 "state": "configuring", 00:09:58.190 "raid_level": "raid0", 00:09:58.190 "superblock": true, 00:09:58.190 "num_base_bdevs": 4, 00:09:58.190 "num_base_bdevs_discovered": 1, 00:09:58.190 "num_base_bdevs_operational": 4, 00:09:58.190 "base_bdevs_list": [ 00:09:58.190 { 00:09:58.190 "name": "pt1", 00:09:58.190 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:58.190 "is_configured": true, 00:09:58.190 "data_offset": 2048, 00:09:58.190 "data_size": 63488 00:09:58.190 }, 00:09:58.190 { 00:09:58.190 "name": null, 00:09:58.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:58.190 "is_configured": false, 00:09:58.190 "data_offset": 2048, 00:09:58.190 "data_size": 63488 00:09:58.190 }, 00:09:58.190 { 00:09:58.190 "name": null, 00:09:58.190 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:58.190 "is_configured": false, 00:09:58.190 "data_offset": 2048, 00:09:58.190 "data_size": 63488 00:09:58.190 }, 00:09:58.190 { 00:09:58.190 "name": null, 00:09:58.190 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:58.190 "is_configured": false, 00:09:58.190 "data_offset": 2048, 00:09:58.190 "data_size": 63488 00:09:58.190 } 00:09:58.190 ] 00:09:58.190 }' 00:09:58.190 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.190 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.450 [2024-11-20 13:23:39.963668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:58.450 [2024-11-20 13:23:39.963747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.450 [2024-11-20 13:23:39.963772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:58.450 [2024-11-20 13:23:39.963784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.450 [2024-11-20 13:23:39.964234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.450 [2024-11-20 13:23:39.964264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:58.450 [2024-11-20 13:23:39.964355] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:58.450 [2024-11-20 13:23:39.964385] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:58.450 pt2 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.450 [2024-11-20 13:23:39.975662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.450 13:23:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.450 13:23:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.450 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.450 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.450 "name": "raid_bdev1", 00:09:58.450 "uuid": "8463ba9f-990e-4a4c-b4cc-b37f24e96dda", 00:09:58.450 "strip_size_kb": 64, 00:09:58.450 "state": "configuring", 00:09:58.450 "raid_level": "raid0", 00:09:58.450 "superblock": true, 00:09:58.450 "num_base_bdevs": 4, 00:09:58.450 "num_base_bdevs_discovered": 1, 00:09:58.450 "num_base_bdevs_operational": 4, 00:09:58.450 "base_bdevs_list": [ 00:09:58.450 { 00:09:58.450 "name": "pt1", 00:09:58.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:58.450 "is_configured": true, 00:09:58.450 "data_offset": 2048, 00:09:58.450 "data_size": 63488 00:09:58.450 }, 00:09:58.450 { 00:09:58.450 "name": null, 00:09:58.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:58.450 "is_configured": false, 00:09:58.450 "data_offset": 0, 00:09:58.450 "data_size": 63488 00:09:58.450 }, 00:09:58.450 { 00:09:58.450 "name": null, 00:09:58.450 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:58.450 "is_configured": false, 00:09:58.450 "data_offset": 2048, 00:09:58.450 "data_size": 63488 00:09:58.450 }, 00:09:58.450 { 00:09:58.450 "name": null, 00:09:58.450 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:58.450 "is_configured": false, 00:09:58.450 "data_offset": 2048, 00:09:58.450 "data_size": 63488 00:09:58.450 } 00:09:58.450 ] 00:09:58.450 }' 00:09:58.450 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.450 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.031 [2024-11-20 13:23:40.402980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:59.031 [2024-11-20 13:23:40.403111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.031 [2024-11-20 13:23:40.403137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:59.031 [2024-11-20 13:23:40.403152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.031 [2024-11-20 13:23:40.403637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.031 [2024-11-20 13:23:40.403674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:59.031 [2024-11-20 13:23:40.403771] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:59.031 [2024-11-20 13:23:40.403810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:59.031 pt2 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.031 [2024-11-20 13:23:40.414928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:59.031 [2024-11-20 13:23:40.415027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.031 [2024-11-20 13:23:40.415053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:59.031 [2024-11-20 13:23:40.415067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.031 [2024-11-20 13:23:40.415592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.031 [2024-11-20 13:23:40.415637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:59.031 [2024-11-20 13:23:40.415737] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:59.031 [2024-11-20 13:23:40.415773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:59.031 pt3 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.031 [2024-11-20 13:23:40.426886] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:09:59.031 [2024-11-20 13:23:40.426958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.031 [2024-11-20 13:23:40.426978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:59.031 [2024-11-20 13:23:40.427007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.031 [2024-11-20 13:23:40.427372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.031 [2024-11-20 13:23:40.427404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:59.031 [2024-11-20 13:23:40.427476] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:59.031 [2024-11-20 13:23:40.427501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:59.031 [2024-11-20 13:23:40.427638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:59.031 [2024-11-20 13:23:40.427654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:59.031 [2024-11-20 13:23:40.427921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:59.031 [2024-11-20 13:23:40.428093] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:59.031 [2024-11-20 13:23:40.428109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:59.031 [2024-11-20 13:23:40.428233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.031 pt4 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:59.031 
13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.031 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.032 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.032 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.032 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.032 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.032 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.032 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.032 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.032 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.032 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.032 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.032 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.032 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.032 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.032 "name": "raid_bdev1", 00:09:59.032 "uuid": "8463ba9f-990e-4a4c-b4cc-b37f24e96dda", 00:09:59.032 "strip_size_kb": 64, 00:09:59.032 "state": "online", 00:09:59.032 "raid_level": "raid0", 00:09:59.032 "superblock": true, 00:09:59.032 
"num_base_bdevs": 4, 00:09:59.032 "num_base_bdevs_discovered": 4, 00:09:59.032 "num_base_bdevs_operational": 4, 00:09:59.032 "base_bdevs_list": [ 00:09:59.032 { 00:09:59.032 "name": "pt1", 00:09:59.032 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.032 "is_configured": true, 00:09:59.032 "data_offset": 2048, 00:09:59.032 "data_size": 63488 00:09:59.032 }, 00:09:59.032 { 00:09:59.032 "name": "pt2", 00:09:59.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.032 "is_configured": true, 00:09:59.032 "data_offset": 2048, 00:09:59.032 "data_size": 63488 00:09:59.032 }, 00:09:59.032 { 00:09:59.032 "name": "pt3", 00:09:59.032 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:59.032 "is_configured": true, 00:09:59.032 "data_offset": 2048, 00:09:59.032 "data_size": 63488 00:09:59.032 }, 00:09:59.032 { 00:09:59.032 "name": "pt4", 00:09:59.032 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:59.032 "is_configured": true, 00:09:59.032 "data_offset": 2048, 00:09:59.032 "data_size": 63488 00:09:59.032 } 00:09:59.032 ] 00:09:59.032 }' 00:09:59.032 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.032 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.292 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:59.292 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:59.292 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:59.292 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:59.292 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:59.292 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:59.292 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:59.292 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.292 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.292 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:59.292 [2024-11-20 13:23:40.902484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.292 13:23:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.292 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.292 "name": "raid_bdev1", 00:09:59.292 "aliases": [ 00:09:59.292 "8463ba9f-990e-4a4c-b4cc-b37f24e96dda" 00:09:59.292 ], 00:09:59.292 "product_name": "Raid Volume", 00:09:59.292 "block_size": 512, 00:09:59.292 "num_blocks": 253952, 00:09:59.292 "uuid": "8463ba9f-990e-4a4c-b4cc-b37f24e96dda", 00:09:59.292 "assigned_rate_limits": { 00:09:59.292 "rw_ios_per_sec": 0, 00:09:59.292 "rw_mbytes_per_sec": 0, 00:09:59.292 "r_mbytes_per_sec": 0, 00:09:59.292 "w_mbytes_per_sec": 0 00:09:59.292 }, 00:09:59.292 "claimed": false, 00:09:59.292 "zoned": false, 00:09:59.292 "supported_io_types": { 00:09:59.292 "read": true, 00:09:59.292 "write": true, 00:09:59.292 "unmap": true, 00:09:59.292 "flush": true, 00:09:59.292 "reset": true, 00:09:59.292 "nvme_admin": false, 00:09:59.292 "nvme_io": false, 00:09:59.292 "nvme_io_md": false, 00:09:59.292 "write_zeroes": true, 00:09:59.292 "zcopy": false, 00:09:59.292 "get_zone_info": false, 00:09:59.292 "zone_management": false, 00:09:59.292 "zone_append": false, 00:09:59.292 "compare": false, 00:09:59.292 "compare_and_write": false, 00:09:59.292 "abort": false, 00:09:59.292 "seek_hole": false, 00:09:59.292 "seek_data": false, 00:09:59.292 "copy": false, 00:09:59.292 "nvme_iov_md": false 00:09:59.292 }, 00:09:59.292 "memory_domains": [ 00:09:59.292 { 00:09:59.292 "dma_device_id": "system", 
00:09:59.292 "dma_device_type": 1 00:09:59.292 }, 00:09:59.292 { 00:09:59.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.292 "dma_device_type": 2 00:09:59.292 }, 00:09:59.292 { 00:09:59.292 "dma_device_id": "system", 00:09:59.292 "dma_device_type": 1 00:09:59.292 }, 00:09:59.292 { 00:09:59.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.292 "dma_device_type": 2 00:09:59.292 }, 00:09:59.292 { 00:09:59.292 "dma_device_id": "system", 00:09:59.292 "dma_device_type": 1 00:09:59.292 }, 00:09:59.292 { 00:09:59.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.292 "dma_device_type": 2 00:09:59.292 }, 00:09:59.292 { 00:09:59.292 "dma_device_id": "system", 00:09:59.292 "dma_device_type": 1 00:09:59.292 }, 00:09:59.292 { 00:09:59.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.292 "dma_device_type": 2 00:09:59.292 } 00:09:59.292 ], 00:09:59.292 "driver_specific": { 00:09:59.292 "raid": { 00:09:59.293 "uuid": "8463ba9f-990e-4a4c-b4cc-b37f24e96dda", 00:09:59.293 "strip_size_kb": 64, 00:09:59.293 "state": "online", 00:09:59.293 "raid_level": "raid0", 00:09:59.293 "superblock": true, 00:09:59.293 "num_base_bdevs": 4, 00:09:59.293 "num_base_bdevs_discovered": 4, 00:09:59.293 "num_base_bdevs_operational": 4, 00:09:59.293 "base_bdevs_list": [ 00:09:59.293 { 00:09:59.293 "name": "pt1", 00:09:59.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.293 "is_configured": true, 00:09:59.293 "data_offset": 2048, 00:09:59.293 "data_size": 63488 00:09:59.293 }, 00:09:59.293 { 00:09:59.293 "name": "pt2", 00:09:59.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.293 "is_configured": true, 00:09:59.293 "data_offset": 2048, 00:09:59.293 "data_size": 63488 00:09:59.293 }, 00:09:59.293 { 00:09:59.293 "name": "pt3", 00:09:59.293 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:59.293 "is_configured": true, 00:09:59.293 "data_offset": 2048, 00:09:59.293 "data_size": 63488 00:09:59.293 }, 00:09:59.293 { 00:09:59.293 "name": "pt4", 00:09:59.293 
"uuid": "00000000-0000-0000-0000-000000000004", 00:09:59.293 "is_configured": true, 00:09:59.293 "data_offset": 2048, 00:09:59.293 "data_size": 63488 00:09:59.293 } 00:09:59.293 ] 00:09:59.293 } 00:09:59.293 } 00:09:59.293 }' 00:09:59.293 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.553 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:59.553 pt2 00:09:59.553 pt3 00:09:59.553 pt4' 00:09:59.553 13:23:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:09:59.553 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.813 [2024-11-20 13:23:41.237817] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8463ba9f-990e-4a4c-b4cc-b37f24e96dda '!=' 8463ba9f-990e-4a4c-b4cc-b37f24e96dda ']' 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81355 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81355 ']' 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81355 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:59.813 13:23:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.813 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81355 00:09:59.813 killing process with pid 81355 00:09:59.814 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.814 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.814 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81355' 00:09:59.814 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 81355 00:09:59.814 [2024-11-20 13:23:41.305045] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.814 [2024-11-20 13:23:41.305149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.814 [2024-11-20 13:23:41.305222] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.814 [2024-11-20 13:23:41.305236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:59.814 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 81355 00:09:59.814 [2024-11-20 13:23:41.350308] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:00.074 13:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:00.074 00:10:00.074 real 0m4.034s 00:10:00.074 user 0m6.306s 00:10:00.074 sys 0m0.935s 00:10:00.074 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.074 13:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.074 ************************************ 00:10:00.074 END TEST raid_superblock_test 00:10:00.074 ************************************ 00:10:00.074 
13:23:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:00.074 13:23:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:00.074 13:23:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.074 13:23:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:00.074 ************************************ 00:10:00.074 START TEST raid_read_error_test 00:10:00.074 ************************************ 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0dh7dhozpM 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81603 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81603 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 
-t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 81603 ']' 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.074 13:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.074 [2024-11-20 13:23:41.726433] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:00.074 [2024-11-20 13:23:41.726567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81603 ] 00:10:00.333 [2024-11-20 13:23:41.859723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.333 [2024-11-20 13:23:41.888128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.333 [2024-11-20 13:23:41.932979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.333 [2024-11-20 13:23:41.933037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.273 BaseBdev1_malloc 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.273 true 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.273 [2024-11-20 13:23:42.608590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:01.273 [2024-11-20 13:23:42.608658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.273 [2024-11-20 13:23:42.608682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:01.273 [2024-11-20 13:23:42.608693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.273 [2024-11-20 13:23:42.611002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.273 [2024-11-20 13:23:42.611054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:01.273 BaseBdev1 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.273 BaseBdev2_malloc 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.273 true 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.273 [2024-11-20 13:23:42.650155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:01.273 [2024-11-20 13:23:42.650211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.273 [2024-11-20 13:23:42.650231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:01.273 [2024-11-20 13:23:42.650251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.273 [2024-11-20 13:23:42.652507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.273 [2024-11-20 13:23:42.652556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:01.273 BaseBdev2 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.273 BaseBdev3_malloc 00:10:01.273 13:23:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.273 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.273 true 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.274 [2024-11-20 13:23:42.691333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:01.274 [2024-11-20 13:23:42.691389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.274 [2024-11-20 13:23:42.691412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:01.274 [2024-11-20 13:23:42.691422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.274 [2024-11-20 13:23:42.693562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.274 [2024-11-20 13:23:42.693602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:01.274 BaseBdev3 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.274 BaseBdev4_malloc 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.274 true 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.274 [2024-11-20 13:23:42.742757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:01.274 [2024-11-20 13:23:42.742820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.274 [2024-11-20 13:23:42.742847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:01.274 [2024-11-20 13:23:42.742858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.274 [2024-11-20 13:23:42.744988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.274 [2024-11-20 13:23:42.745042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:01.274 BaseBdev4 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.274 [2024-11-20 13:23:42.754777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.274 [2024-11-20 13:23:42.756733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.274 [2024-11-20 13:23:42.756821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.274 [2024-11-20 13:23:42.756881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:01.274 [2024-11-20 13:23:42.757099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:01.274 [2024-11-20 13:23:42.757122] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:01.274 [2024-11-20 13:23:42.757410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:10:01.274 [2024-11-20 13:23:42.757575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:01.274 [2024-11-20 13:23:42.757599] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:01.274 [2024-11-20 13:23:42.757754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:01.274 13:23:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.274 "name": "raid_bdev1", 00:10:01.274 "uuid": "6d9bc426-7d98-47c9-bec6-8d29e769c26a", 00:10:01.274 "strip_size_kb": 64, 00:10:01.274 "state": "online", 00:10:01.274 "raid_level": "raid0", 00:10:01.274 "superblock": true, 00:10:01.274 "num_base_bdevs": 4, 00:10:01.274 "num_base_bdevs_discovered": 4, 00:10:01.274 "num_base_bdevs_operational": 4, 00:10:01.274 "base_bdevs_list": [ 00:10:01.274 
{ 00:10:01.274 "name": "BaseBdev1", 00:10:01.274 "uuid": "ef4a7bfb-fd6e-5420-abe6-47fd975491d4", 00:10:01.274 "is_configured": true, 00:10:01.274 "data_offset": 2048, 00:10:01.274 "data_size": 63488 00:10:01.274 }, 00:10:01.274 { 00:10:01.274 "name": "BaseBdev2", 00:10:01.274 "uuid": "d110cde4-e7b2-532d-83de-a532bc946bfc", 00:10:01.274 "is_configured": true, 00:10:01.274 "data_offset": 2048, 00:10:01.274 "data_size": 63488 00:10:01.274 }, 00:10:01.274 { 00:10:01.274 "name": "BaseBdev3", 00:10:01.274 "uuid": "7a4eb701-251e-5198-8146-4764e50a51e6", 00:10:01.274 "is_configured": true, 00:10:01.274 "data_offset": 2048, 00:10:01.274 "data_size": 63488 00:10:01.274 }, 00:10:01.274 { 00:10:01.274 "name": "BaseBdev4", 00:10:01.274 "uuid": "04d29595-b2cf-56cf-b921-73a131288f41", 00:10:01.274 "is_configured": true, 00:10:01.274 "data_offset": 2048, 00:10:01.274 "data_size": 63488 00:10:01.274 } 00:10:01.274 ] 00:10:01.274 }' 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.274 13:23:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.535 13:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:01.535 13:23:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:01.794 [2024-11-20 13:23:43.282350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.734 13:23:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.734 13:23:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.734 "name": "raid_bdev1", 00:10:02.734 "uuid": "6d9bc426-7d98-47c9-bec6-8d29e769c26a", 00:10:02.734 "strip_size_kb": 64, 00:10:02.734 "state": "online", 00:10:02.734 "raid_level": "raid0", 00:10:02.734 "superblock": true, 00:10:02.734 "num_base_bdevs": 4, 00:10:02.734 "num_base_bdevs_discovered": 4, 00:10:02.734 "num_base_bdevs_operational": 4, 00:10:02.734 "base_bdevs_list": [ 00:10:02.734 { 00:10:02.734 "name": "BaseBdev1", 00:10:02.734 "uuid": "ef4a7bfb-fd6e-5420-abe6-47fd975491d4", 00:10:02.734 "is_configured": true, 00:10:02.734 "data_offset": 2048, 00:10:02.734 "data_size": 63488 00:10:02.734 }, 00:10:02.734 { 00:10:02.734 "name": "BaseBdev2", 00:10:02.734 "uuid": "d110cde4-e7b2-532d-83de-a532bc946bfc", 00:10:02.734 "is_configured": true, 00:10:02.734 "data_offset": 2048, 00:10:02.734 "data_size": 63488 00:10:02.734 }, 00:10:02.734 { 00:10:02.734 "name": "BaseBdev3", 00:10:02.734 "uuid": "7a4eb701-251e-5198-8146-4764e50a51e6", 00:10:02.734 "is_configured": true, 00:10:02.734 "data_offset": 2048, 00:10:02.734 "data_size": 63488 00:10:02.734 }, 00:10:02.734 { 00:10:02.734 "name": "BaseBdev4", 00:10:02.734 "uuid": "04d29595-b2cf-56cf-b921-73a131288f41", 00:10:02.734 "is_configured": true, 00:10:02.734 "data_offset": 2048, 00:10:02.734 "data_size": 63488 00:10:02.734 } 00:10:02.734 ] 00:10:02.734 }' 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.734 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.303 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:03.303 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.303 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.304 [2024-11-20 13:23:44.666732] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:03.304 [2024-11-20 13:23:44.666783] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:03.304 [2024-11-20 13:23:44.669499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.304 [2024-11-20 13:23:44.669557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.304 [2024-11-20 13:23:44.669607] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:03.304 [2024-11-20 13:23:44.669617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:03.304 { 00:10:03.304 "results": [ 00:10:03.304 { 00:10:03.304 "job": "raid_bdev1", 00:10:03.304 "core_mask": "0x1", 00:10:03.304 "workload": "randrw", 00:10:03.304 "percentage": 50, 00:10:03.304 "status": "finished", 00:10:03.304 "queue_depth": 1, 00:10:03.304 "io_size": 131072, 00:10:03.304 "runtime": 1.385092, 00:10:03.304 "iops": 15383.093686195574, 00:10:03.304 "mibps": 1922.8867107744468, 00:10:03.304 "io_failed": 1, 00:10:03.304 "io_timeout": 0, 00:10:03.304 "avg_latency_us": 90.1330766966996, 00:10:03.304 "min_latency_us": 27.50043668122271, 00:10:03.304 "max_latency_us": 1488.1537117903931 00:10:03.304 } 00:10:03.304 ], 00:10:03.304 "core_count": 1 00:10:03.304 } 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81603 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 81603 ']' 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 81603 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81603 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.304 killing process with pid 81603 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81603' 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 81603 00:10:03.304 [2024-11-20 13:23:44.706587] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 81603 00:10:03.304 [2024-11-20 13:23:44.742828] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0dh7dhozpM 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:03.304 00:10:03.304 real 0m3.332s 00:10:03.304 user 0m4.243s 00:10:03.304 sys 0m0.521s 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:03.304 13:23:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.304 ************************************ 00:10:03.304 END TEST raid_read_error_test 00:10:03.304 ************************************ 00:10:03.564 13:23:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:03.564 13:23:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:03.564 13:23:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.564 13:23:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.564 ************************************ 00:10:03.564 START TEST raid_write_error_test 00:10:03.564 ************************************ 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.21RlBJYu8d 00:10:03.564 13:23:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81733 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81733 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 81733 ']' 00:10:03.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.564 13:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.564 [2024-11-20 13:23:45.142192] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:03.564 [2024-11-20 13:23:45.142404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81733 ] 00:10:03.824 [2024-11-20 13:23:45.275101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.824 [2024-11-20 13:23:45.302460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.824 [2024-11-20 13:23:45.346649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.824 [2024-11-20 13:23:45.346689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.439 13:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.439 13:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:04.439 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.439 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:04.439 13:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.439 13:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.439 BaseBdev1_malloc 00:10:04.439 13:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.439 13:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:04.439 13:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.439 13:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.439 true 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.440 [2024-11-20 13:23:46.018110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:04.440 [2024-11-20 13:23:46.018180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.440 [2024-11-20 13:23:46.018205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:04.440 [2024-11-20 13:23:46.018216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.440 [2024-11-20 13:23:46.020581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.440 [2024-11-20 13:23:46.020682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:04.440 BaseBdev1 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.440 BaseBdev2_malloc 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:04.440 13:23:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.440 true 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.440 [2024-11-20 13:23:46.059306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:04.440 [2024-11-20 13:23:46.059411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.440 [2024-11-20 13:23:46.059438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:04.440 [2024-11-20 13:23:46.059459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.440 [2024-11-20 13:23:46.061671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.440 [2024-11-20 13:23:46.061716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:04.440 BaseBdev2 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:04.440 BaseBdev3_malloc 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.440 true 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.440 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.440 [2024-11-20 13:23:46.100288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:04.440 [2024-11-20 13:23:46.100346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.440 [2024-11-20 13:23:46.100369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:04.440 [2024-11-20 13:23:46.100381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.440 [2024-11-20 13:23:46.102694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.440 [2024-11-20 13:23:46.102781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:04.701 BaseBdev3 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.701 BaseBdev4_malloc 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.701 true 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.701 [2024-11-20 13:23:46.150180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:04.701 [2024-11-20 13:23:46.150296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.701 [2024-11-20 13:23:46.150329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:04.701 [2024-11-20 13:23:46.150341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.701 [2024-11-20 13:23:46.152657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.701 [2024-11-20 13:23:46.152701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:04.701 BaseBdev4 
00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.701 [2024-11-20 13:23:46.162220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.701 [2024-11-20 13:23:46.164090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.701 [2024-11-20 13:23:46.164176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.701 [2024-11-20 13:23:46.164240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:04.701 [2024-11-20 13:23:46.164450] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:04.701 [2024-11-20 13:23:46.164470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:04.701 [2024-11-20 13:23:46.164725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:10:04.701 [2024-11-20 13:23:46.164880] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:04.701 [2024-11-20 13:23:46.164893] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:04.701 [2024-11-20 13:23:46.165040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.701 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.701 "name": "raid_bdev1", 00:10:04.701 "uuid": "fa903637-50f2-41c7-b975-6f6ca0590528", 00:10:04.701 "strip_size_kb": 64, 00:10:04.701 "state": "online", 00:10:04.701 "raid_level": "raid0", 00:10:04.701 "superblock": true, 00:10:04.701 "num_base_bdevs": 4, 00:10:04.701 "num_base_bdevs_discovered": 4, 00:10:04.701 
"num_base_bdevs_operational": 4, 00:10:04.701 "base_bdevs_list": [ 00:10:04.701 { 00:10:04.701 "name": "BaseBdev1", 00:10:04.701 "uuid": "1bd2eeec-20c7-5914-a02f-614a1d4926cd", 00:10:04.701 "is_configured": true, 00:10:04.701 "data_offset": 2048, 00:10:04.701 "data_size": 63488 00:10:04.701 }, 00:10:04.702 { 00:10:04.702 "name": "BaseBdev2", 00:10:04.702 "uuid": "35aaf82e-08a2-5f62-aab2-c324eaf3cabe", 00:10:04.702 "is_configured": true, 00:10:04.702 "data_offset": 2048, 00:10:04.702 "data_size": 63488 00:10:04.702 }, 00:10:04.702 { 00:10:04.702 "name": "BaseBdev3", 00:10:04.702 "uuid": "e7f4b971-3b65-5a2d-b008-ff44522bf336", 00:10:04.702 "is_configured": true, 00:10:04.702 "data_offset": 2048, 00:10:04.702 "data_size": 63488 00:10:04.702 }, 00:10:04.702 { 00:10:04.702 "name": "BaseBdev4", 00:10:04.702 "uuid": "bb37569f-b368-5a4b-b242-173625855b8a", 00:10:04.702 "is_configured": true, 00:10:04.702 "data_offset": 2048, 00:10:04.702 "data_size": 63488 00:10:04.702 } 00:10:04.702 ] 00:10:04.702 }' 00:10:04.702 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.702 13:23:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.962 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:04.962 13:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:05.221 [2024-11-20 13:23:46.709709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.160 "name": "raid_bdev1", 00:10:06.160 "uuid": "fa903637-50f2-41c7-b975-6f6ca0590528", 00:10:06.160 "strip_size_kb": 64, 00:10:06.160 "state": "online", 00:10:06.160 "raid_level": "raid0", 00:10:06.160 "superblock": true, 00:10:06.160 "num_base_bdevs": 4, 00:10:06.160 "num_base_bdevs_discovered": 4, 00:10:06.160 "num_base_bdevs_operational": 4, 00:10:06.160 "base_bdevs_list": [ 00:10:06.160 { 00:10:06.160 "name": "BaseBdev1", 00:10:06.160 "uuid": "1bd2eeec-20c7-5914-a02f-614a1d4926cd", 00:10:06.160 "is_configured": true, 00:10:06.160 "data_offset": 2048, 00:10:06.160 "data_size": 63488 00:10:06.160 }, 00:10:06.160 { 00:10:06.160 "name": "BaseBdev2", 00:10:06.160 "uuid": "35aaf82e-08a2-5f62-aab2-c324eaf3cabe", 00:10:06.160 "is_configured": true, 00:10:06.160 "data_offset": 2048, 00:10:06.160 "data_size": 63488 00:10:06.160 }, 00:10:06.160 { 00:10:06.160 "name": "BaseBdev3", 00:10:06.160 "uuid": "e7f4b971-3b65-5a2d-b008-ff44522bf336", 00:10:06.160 "is_configured": true, 00:10:06.160 "data_offset": 2048, 00:10:06.160 "data_size": 63488 00:10:06.160 }, 00:10:06.160 { 00:10:06.160 "name": "BaseBdev4", 00:10:06.160 "uuid": "bb37569f-b368-5a4b-b242-173625855b8a", 00:10:06.160 "is_configured": true, 00:10:06.160 "data_offset": 2048, 00:10:06.160 "data_size": 63488 00:10:06.160 } 00:10:06.160 ] 00:10:06.160 }' 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.160 13:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.419 13:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:06.419 13:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.419 13:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:06.419 [2024-11-20 13:23:48.077908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:06.419 [2024-11-20 13:23:48.078036] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.419 [2024-11-20 13:23:48.080678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.419 [2024-11-20 13:23:48.080786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.419 [2024-11-20 13:23:48.080863] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.419 [2024-11-20 13:23:48.080924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:06.419 { 00:10:06.419 "results": [ 00:10:06.419 { 00:10:06.419 "job": "raid_bdev1", 00:10:06.419 "core_mask": "0x1", 00:10:06.419 "workload": "randrw", 00:10:06.419 "percentage": 50, 00:10:06.419 "status": "finished", 00:10:06.419 "queue_depth": 1, 00:10:06.419 "io_size": 131072, 00:10:06.419 "runtime": 1.368999, 00:10:06.419 "iops": 15593.875525109952, 00:10:06.419 "mibps": 1949.234440638744, 00:10:06.419 "io_failed": 1, 00:10:06.419 "io_timeout": 0, 00:10:06.419 "avg_latency_us": 88.93433949945192, 00:10:06.419 "min_latency_us": 27.388646288209607, 00:10:06.419 "max_latency_us": 1516.7720524017468 00:10:06.419 } 00:10:06.419 ], 00:10:06.419 "core_count": 1 00:10:06.419 } 00:10:06.419 13:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.419 13:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81733 00:10:06.419 13:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 81733 ']' 00:10:06.419 13:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 81733 00:10:06.680 13:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:10:06.680 13:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.680 13:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81733 00:10:06.680 13:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.680 13:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.680 13:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81733' 00:10:06.680 killing process with pid 81733 00:10:06.680 13:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 81733 00:10:06.680 [2024-11-20 13:23:48.121611] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.680 13:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 81733 00:10:06.680 [2024-11-20 13:23:48.157805] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:06.940 13:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:06.940 13:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.21RlBJYu8d 00:10:06.940 13:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:06.940 13:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:06.940 13:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:06.940 13:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:06.940 13:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:06.940 13:23:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:06.940 ************************************ 00:10:06.940 END TEST raid_write_error_test 00:10:06.940 
************************************ 00:10:06.940 00:10:06.940 real 0m3.349s 00:10:06.940 user 0m4.268s 00:10:06.940 sys 0m0.505s 00:10:06.940 13:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.940 13:23:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.940 13:23:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:06.940 13:23:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:06.940 13:23:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:06.940 13:23:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.940 13:23:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:06.940 ************************************ 00:10:06.940 START TEST raid_state_function_test 00:10:06.940 ************************************ 00:10:06.940 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:06.940 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:06.940 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:06.940 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:06.940 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:06.940 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:06.940 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.940 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.941 13:23:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:06.941 13:23:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81865 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81865' 00:10:06.941 Process raid pid: 81865 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81865 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 81865 ']' 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.941 13:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.941 [2024-11-20 13:23:48.540507] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:06.941 [2024-11-20 13:23:48.540646] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.200 [2024-11-20 13:23:48.694656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.200 [2024-11-20 13:23:48.724438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.200 [2024-11-20 13:23:48.768845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.200 [2024-11-20 13:23:48.769016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.768 [2024-11-20 13:23:49.395696] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:07.768 [2024-11-20 13:23:49.395848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:07.768 [2024-11-20 13:23:49.395878] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:07.768 [2024-11-20 13:23:49.395893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:07.768 [2024-11-20 13:23:49.395901] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:07.768 [2024-11-20 13:23:49.395915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:07.768 [2024-11-20 13:23:49.395923] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:07.768 [2024-11-20 13:23:49.395934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.768 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.028 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.028 "name": "Existed_Raid", 00:10:08.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.028 "strip_size_kb": 64, 00:10:08.028 "state": "configuring", 00:10:08.028 "raid_level": "concat", 00:10:08.028 "superblock": false, 00:10:08.028 "num_base_bdevs": 4, 00:10:08.028 "num_base_bdevs_discovered": 0, 00:10:08.028 "num_base_bdevs_operational": 4, 00:10:08.028 "base_bdevs_list": [ 00:10:08.028 { 00:10:08.028 "name": "BaseBdev1", 00:10:08.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.028 "is_configured": false, 00:10:08.028 "data_offset": 0, 00:10:08.028 "data_size": 0 00:10:08.028 }, 00:10:08.028 { 00:10:08.028 "name": "BaseBdev2", 00:10:08.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.028 "is_configured": false, 00:10:08.028 "data_offset": 0, 00:10:08.028 "data_size": 0 00:10:08.028 }, 00:10:08.028 { 00:10:08.028 "name": "BaseBdev3", 00:10:08.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.028 "is_configured": false, 00:10:08.028 "data_offset": 0, 00:10:08.028 "data_size": 0 00:10:08.028 }, 00:10:08.028 { 00:10:08.028 "name": "BaseBdev4", 00:10:08.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.028 "is_configured": false, 00:10:08.028 "data_offset": 0, 00:10:08.028 "data_size": 0 00:10:08.028 } 00:10:08.028 ] 00:10:08.028 }' 00:10:08.028 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.028 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.288 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:08.288 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.288 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.288 [2024-11-20 13:23:49.838768] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.288 [2024-11-20 13:23:49.838914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:08.288 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.288 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:08.288 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.288 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.288 [2024-11-20 13:23:49.850792] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.288 [2024-11-20 13:23:49.850934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.288 [2024-11-20 13:23:49.850967] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.288 [2024-11-20 13:23:49.851010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.288 [2024-11-20 13:23:49.851034] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:08.288 [2024-11-20 13:23:49.851061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.288 [2024-11-20 13:23:49.851083] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:08.288 [2024-11-20 13:23:49.851111] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:08.288 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.288 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.289 [2024-11-20 13:23:49.872281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.289 BaseBdev1 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.289 [ 00:10:08.289 { 00:10:08.289 "name": "BaseBdev1", 00:10:08.289 "aliases": [ 00:10:08.289 "0a260600-553b-4f34-b274-59ffbccd64d5" 00:10:08.289 ], 00:10:08.289 "product_name": "Malloc disk", 00:10:08.289 "block_size": 512, 00:10:08.289 "num_blocks": 65536, 00:10:08.289 "uuid": "0a260600-553b-4f34-b274-59ffbccd64d5", 00:10:08.289 "assigned_rate_limits": { 00:10:08.289 "rw_ios_per_sec": 0, 00:10:08.289 "rw_mbytes_per_sec": 0, 00:10:08.289 "r_mbytes_per_sec": 0, 00:10:08.289 "w_mbytes_per_sec": 0 00:10:08.289 }, 00:10:08.289 "claimed": true, 00:10:08.289 "claim_type": "exclusive_write", 00:10:08.289 "zoned": false, 00:10:08.289 "supported_io_types": { 00:10:08.289 "read": true, 00:10:08.289 "write": true, 00:10:08.289 "unmap": true, 00:10:08.289 "flush": true, 00:10:08.289 "reset": true, 00:10:08.289 "nvme_admin": false, 00:10:08.289 "nvme_io": false, 00:10:08.289 "nvme_io_md": false, 00:10:08.289 "write_zeroes": true, 00:10:08.289 "zcopy": true, 00:10:08.289 "get_zone_info": false, 00:10:08.289 "zone_management": false, 00:10:08.289 "zone_append": false, 00:10:08.289 "compare": false, 00:10:08.289 "compare_and_write": false, 00:10:08.289 "abort": true, 00:10:08.289 "seek_hole": false, 00:10:08.289 "seek_data": false, 00:10:08.289 "copy": true, 00:10:08.289 "nvme_iov_md": false 00:10:08.289 }, 00:10:08.289 "memory_domains": [ 00:10:08.289 { 00:10:08.289 "dma_device_id": "system", 00:10:08.289 "dma_device_type": 1 00:10:08.289 }, 00:10:08.289 { 00:10:08.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.289 "dma_device_type": 2 00:10:08.289 } 00:10:08.289 ], 00:10:08.289 "driver_specific": {} 00:10:08.289 } 00:10:08.289 ] 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.289 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.549 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.549 "name": "Existed_Raid", 
00:10:08.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.549 "strip_size_kb": 64, 00:10:08.549 "state": "configuring", 00:10:08.549 "raid_level": "concat", 00:10:08.549 "superblock": false, 00:10:08.549 "num_base_bdevs": 4, 00:10:08.549 "num_base_bdevs_discovered": 1, 00:10:08.549 "num_base_bdevs_operational": 4, 00:10:08.549 "base_bdevs_list": [ 00:10:08.549 { 00:10:08.549 "name": "BaseBdev1", 00:10:08.549 "uuid": "0a260600-553b-4f34-b274-59ffbccd64d5", 00:10:08.549 "is_configured": true, 00:10:08.549 "data_offset": 0, 00:10:08.549 "data_size": 65536 00:10:08.549 }, 00:10:08.549 { 00:10:08.549 "name": "BaseBdev2", 00:10:08.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.549 "is_configured": false, 00:10:08.549 "data_offset": 0, 00:10:08.549 "data_size": 0 00:10:08.549 }, 00:10:08.549 { 00:10:08.549 "name": "BaseBdev3", 00:10:08.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.549 "is_configured": false, 00:10:08.550 "data_offset": 0, 00:10:08.550 "data_size": 0 00:10:08.550 }, 00:10:08.550 { 00:10:08.550 "name": "BaseBdev4", 00:10:08.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.550 "is_configured": false, 00:10:08.550 "data_offset": 0, 00:10:08.550 "data_size": 0 00:10:08.550 } 00:10:08.550 ] 00:10:08.550 }' 00:10:08.550 13:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.550 13:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.809 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:08.809 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.809 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.809 [2024-11-20 13:23:50.351642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.809 [2024-11-20 13:23:50.351711] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.810 [2024-11-20 13:23:50.359664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.810 [2024-11-20 13:23:50.361808] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.810 [2024-11-20 13:23:50.361860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.810 [2024-11-20 13:23:50.361872] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:08.810 [2024-11-20 13:23:50.361883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.810 [2024-11-20 13:23:50.361892] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:08.810 [2024-11-20 13:23:50.361902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.810 "name": "Existed_Raid", 00:10:08.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.810 "strip_size_kb": 64, 00:10:08.810 "state": "configuring", 00:10:08.810 "raid_level": "concat", 00:10:08.810 "superblock": false, 00:10:08.810 "num_base_bdevs": 4, 00:10:08.810 
"num_base_bdevs_discovered": 1, 00:10:08.810 "num_base_bdevs_operational": 4, 00:10:08.810 "base_bdevs_list": [ 00:10:08.810 { 00:10:08.810 "name": "BaseBdev1", 00:10:08.810 "uuid": "0a260600-553b-4f34-b274-59ffbccd64d5", 00:10:08.810 "is_configured": true, 00:10:08.810 "data_offset": 0, 00:10:08.810 "data_size": 65536 00:10:08.810 }, 00:10:08.810 { 00:10:08.810 "name": "BaseBdev2", 00:10:08.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.810 "is_configured": false, 00:10:08.810 "data_offset": 0, 00:10:08.810 "data_size": 0 00:10:08.810 }, 00:10:08.810 { 00:10:08.810 "name": "BaseBdev3", 00:10:08.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.810 "is_configured": false, 00:10:08.810 "data_offset": 0, 00:10:08.810 "data_size": 0 00:10:08.810 }, 00:10:08.810 { 00:10:08.810 "name": "BaseBdev4", 00:10:08.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.810 "is_configured": false, 00:10:08.810 "data_offset": 0, 00:10:08.810 "data_size": 0 00:10:08.810 } 00:10:08.810 ] 00:10:08.810 }' 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.810 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.408 [2024-11-20 13:23:50.842233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.408 BaseBdev2 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:09.408 13:23:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.408 [ 00:10:09.408 { 00:10:09.408 "name": "BaseBdev2", 00:10:09.408 "aliases": [ 00:10:09.408 "119ac189-4d88-4341-8f55-1bef9080e4a6" 00:10:09.408 ], 00:10:09.408 "product_name": "Malloc disk", 00:10:09.408 "block_size": 512, 00:10:09.408 "num_blocks": 65536, 00:10:09.408 "uuid": "119ac189-4d88-4341-8f55-1bef9080e4a6", 00:10:09.408 "assigned_rate_limits": { 00:10:09.408 "rw_ios_per_sec": 0, 00:10:09.408 "rw_mbytes_per_sec": 0, 00:10:09.408 "r_mbytes_per_sec": 0, 00:10:09.408 "w_mbytes_per_sec": 0 00:10:09.408 }, 00:10:09.408 "claimed": true, 00:10:09.408 "claim_type": "exclusive_write", 00:10:09.408 "zoned": false, 00:10:09.408 "supported_io_types": { 
00:10:09.408 "read": true, 00:10:09.408 "write": true, 00:10:09.408 "unmap": true, 00:10:09.408 "flush": true, 00:10:09.408 "reset": true, 00:10:09.408 "nvme_admin": false, 00:10:09.408 "nvme_io": false, 00:10:09.408 "nvme_io_md": false, 00:10:09.408 "write_zeroes": true, 00:10:09.408 "zcopy": true, 00:10:09.408 "get_zone_info": false, 00:10:09.408 "zone_management": false, 00:10:09.408 "zone_append": false, 00:10:09.408 "compare": false, 00:10:09.408 "compare_and_write": false, 00:10:09.408 "abort": true, 00:10:09.408 "seek_hole": false, 00:10:09.408 "seek_data": false, 00:10:09.408 "copy": true, 00:10:09.408 "nvme_iov_md": false 00:10:09.408 }, 00:10:09.408 "memory_domains": [ 00:10:09.408 { 00:10:09.408 "dma_device_id": "system", 00:10:09.408 "dma_device_type": 1 00:10:09.408 }, 00:10:09.408 { 00:10:09.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.408 "dma_device_type": 2 00:10:09.408 } 00:10:09.408 ], 00:10:09.408 "driver_specific": {} 00:10:09.408 } 00:10:09.408 ] 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.408 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.409 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:09.409 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.409 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.409 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.409 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.409 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.409 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.409 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.409 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.409 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.409 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.409 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.409 "name": "Existed_Raid", 00:10:09.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.409 "strip_size_kb": 64, 00:10:09.409 "state": "configuring", 00:10:09.409 "raid_level": "concat", 00:10:09.409 "superblock": false, 00:10:09.409 "num_base_bdevs": 4, 00:10:09.409 "num_base_bdevs_discovered": 2, 00:10:09.409 "num_base_bdevs_operational": 4, 00:10:09.409 "base_bdevs_list": [ 00:10:09.409 { 00:10:09.409 "name": "BaseBdev1", 00:10:09.409 "uuid": "0a260600-553b-4f34-b274-59ffbccd64d5", 00:10:09.409 "is_configured": true, 00:10:09.409 "data_offset": 0, 00:10:09.409 "data_size": 65536 00:10:09.409 }, 00:10:09.409 { 00:10:09.409 "name": "BaseBdev2", 00:10:09.409 "uuid": "119ac189-4d88-4341-8f55-1bef9080e4a6", 00:10:09.409 
"is_configured": true, 00:10:09.409 "data_offset": 0, 00:10:09.409 "data_size": 65536 00:10:09.409 }, 00:10:09.409 { 00:10:09.409 "name": "BaseBdev3", 00:10:09.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.409 "is_configured": false, 00:10:09.409 "data_offset": 0, 00:10:09.409 "data_size": 0 00:10:09.409 }, 00:10:09.409 { 00:10:09.409 "name": "BaseBdev4", 00:10:09.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.409 "is_configured": false, 00:10:09.409 "data_offset": 0, 00:10:09.409 "data_size": 0 00:10:09.409 } 00:10:09.409 ] 00:10:09.409 }' 00:10:09.409 13:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.409 13:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.668 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:09.668 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.668 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.668 [2024-11-20 13:23:51.300664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.668 BaseBdev3 00:10:09.668 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.668 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:09.668 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:09.668 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.668 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:09.668 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.669 13:23:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.669 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.669 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.669 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.669 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.669 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:09.669 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.669 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.669 [ 00:10:09.669 { 00:10:09.669 "name": "BaseBdev3", 00:10:09.669 "aliases": [ 00:10:09.669 "7082c8cb-b8e8-42fb-8fb1-1fac159b10a9" 00:10:09.669 ], 00:10:09.669 "product_name": "Malloc disk", 00:10:09.669 "block_size": 512, 00:10:09.669 "num_blocks": 65536, 00:10:09.669 "uuid": "7082c8cb-b8e8-42fb-8fb1-1fac159b10a9", 00:10:09.669 "assigned_rate_limits": { 00:10:09.669 "rw_ios_per_sec": 0, 00:10:09.669 "rw_mbytes_per_sec": 0, 00:10:09.669 "r_mbytes_per_sec": 0, 00:10:09.669 "w_mbytes_per_sec": 0 00:10:09.669 }, 00:10:09.669 "claimed": true, 00:10:09.669 "claim_type": "exclusive_write", 00:10:09.669 "zoned": false, 00:10:09.669 "supported_io_types": { 00:10:09.669 "read": true, 00:10:09.669 "write": true, 00:10:09.669 "unmap": true, 00:10:09.669 "flush": true, 00:10:09.669 "reset": true, 00:10:09.669 "nvme_admin": false, 00:10:09.669 "nvme_io": false, 00:10:09.669 "nvme_io_md": false, 00:10:09.669 "write_zeroes": true, 00:10:09.669 "zcopy": true, 00:10:09.669 "get_zone_info": false, 00:10:09.669 "zone_management": false, 00:10:09.669 "zone_append": false, 00:10:09.669 "compare": false, 00:10:09.669 "compare_and_write": false, 
00:10:09.669 "abort": true, 00:10:09.669 "seek_hole": false, 00:10:09.669 "seek_data": false, 00:10:09.669 "copy": true, 00:10:09.669 "nvme_iov_md": false 00:10:09.669 }, 00:10:09.929 "memory_domains": [ 00:10:09.929 { 00:10:09.929 "dma_device_id": "system", 00:10:09.929 "dma_device_type": 1 00:10:09.929 }, 00:10:09.929 { 00:10:09.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.929 "dma_device_type": 2 00:10:09.929 } 00:10:09.929 ], 00:10:09.929 "driver_specific": {} 00:10:09.929 } 00:10:09.929 ] 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.929 "name": "Existed_Raid", 00:10:09.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.929 "strip_size_kb": 64, 00:10:09.929 "state": "configuring", 00:10:09.929 "raid_level": "concat", 00:10:09.929 "superblock": false, 00:10:09.929 "num_base_bdevs": 4, 00:10:09.929 "num_base_bdevs_discovered": 3, 00:10:09.929 "num_base_bdevs_operational": 4, 00:10:09.929 "base_bdevs_list": [ 00:10:09.929 { 00:10:09.929 "name": "BaseBdev1", 00:10:09.929 "uuid": "0a260600-553b-4f34-b274-59ffbccd64d5", 00:10:09.929 "is_configured": true, 00:10:09.929 "data_offset": 0, 00:10:09.929 "data_size": 65536 00:10:09.929 }, 00:10:09.929 { 00:10:09.929 "name": "BaseBdev2", 00:10:09.929 "uuid": "119ac189-4d88-4341-8f55-1bef9080e4a6", 00:10:09.929 "is_configured": true, 00:10:09.929 "data_offset": 0, 00:10:09.929 "data_size": 65536 00:10:09.929 }, 00:10:09.929 { 00:10:09.929 "name": "BaseBdev3", 00:10:09.929 "uuid": "7082c8cb-b8e8-42fb-8fb1-1fac159b10a9", 00:10:09.929 "is_configured": true, 00:10:09.929 "data_offset": 0, 00:10:09.929 "data_size": 65536 00:10:09.929 }, 00:10:09.929 { 00:10:09.929 "name": "BaseBdev4", 00:10:09.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.929 "is_configured": false, 
00:10:09.929 "data_offset": 0, 00:10:09.929 "data_size": 0 00:10:09.929 } 00:10:09.929 ] 00:10:09.929 }' 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.929 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.190 [2024-11-20 13:23:51.767364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:10.190 [2024-11-20 13:23:51.767533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:10.190 [2024-11-20 13:23:51.767564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:10.190 [2024-11-20 13:23:51.767934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:10.190 [2024-11-20 13:23:51.768145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:10.190 [2024-11-20 13:23:51.768202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:10:10.190 [2024-11-20 13:23:51.768499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.190 BaseBdev4 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.190 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.190 [ 00:10:10.190 { 00:10:10.190 "name": "BaseBdev4", 00:10:10.190 "aliases": [ 00:10:10.190 "24ccffab-47f2-46b3-8090-415f4d8bba97" 00:10:10.190 ], 00:10:10.190 "product_name": "Malloc disk", 00:10:10.190 "block_size": 512, 00:10:10.190 "num_blocks": 65536, 00:10:10.190 "uuid": "24ccffab-47f2-46b3-8090-415f4d8bba97", 00:10:10.190 "assigned_rate_limits": { 00:10:10.190 "rw_ios_per_sec": 0, 00:10:10.190 "rw_mbytes_per_sec": 0, 00:10:10.190 "r_mbytes_per_sec": 0, 00:10:10.190 "w_mbytes_per_sec": 0 00:10:10.190 }, 00:10:10.190 "claimed": true, 00:10:10.190 "claim_type": "exclusive_write", 00:10:10.190 "zoned": false, 00:10:10.190 "supported_io_types": { 00:10:10.190 "read": true, 00:10:10.190 "write": true, 00:10:10.190 "unmap": true, 00:10:10.190 "flush": true, 00:10:10.190 "reset": true, 00:10:10.190 
"nvme_admin": false, 00:10:10.190 "nvme_io": false, 00:10:10.190 "nvme_io_md": false, 00:10:10.190 "write_zeroes": true, 00:10:10.190 "zcopy": true, 00:10:10.190 "get_zone_info": false, 00:10:10.190 "zone_management": false, 00:10:10.190 "zone_append": false, 00:10:10.190 "compare": false, 00:10:10.190 "compare_and_write": false, 00:10:10.190 "abort": true, 00:10:10.190 "seek_hole": false, 00:10:10.190 "seek_data": false, 00:10:10.190 "copy": true, 00:10:10.190 "nvme_iov_md": false 00:10:10.190 }, 00:10:10.190 "memory_domains": [ 00:10:10.190 { 00:10:10.190 "dma_device_id": "system", 00:10:10.190 "dma_device_type": 1 00:10:10.191 }, 00:10:10.191 { 00:10:10.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.191 "dma_device_type": 2 00:10:10.191 } 00:10:10.191 ], 00:10:10.191 "driver_specific": {} 00:10:10.191 } 00:10:10.191 ] 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.191 
13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.191 "name": "Existed_Raid", 00:10:10.191 "uuid": "43fdee38-7f86-4ea6-9e36-6a7895a2ccd3", 00:10:10.191 "strip_size_kb": 64, 00:10:10.191 "state": "online", 00:10:10.191 "raid_level": "concat", 00:10:10.191 "superblock": false, 00:10:10.191 "num_base_bdevs": 4, 00:10:10.191 "num_base_bdevs_discovered": 4, 00:10:10.191 "num_base_bdevs_operational": 4, 00:10:10.191 "base_bdevs_list": [ 00:10:10.191 { 00:10:10.191 "name": "BaseBdev1", 00:10:10.191 "uuid": "0a260600-553b-4f34-b274-59ffbccd64d5", 00:10:10.191 "is_configured": true, 00:10:10.191 "data_offset": 0, 00:10:10.191 "data_size": 65536 00:10:10.191 }, 00:10:10.191 { 00:10:10.191 "name": "BaseBdev2", 00:10:10.191 "uuid": "119ac189-4d88-4341-8f55-1bef9080e4a6", 00:10:10.191 "is_configured": true, 00:10:10.191 "data_offset": 0, 00:10:10.191 "data_size": 65536 00:10:10.191 }, 00:10:10.191 { 00:10:10.191 "name": "BaseBdev3", 
00:10:10.191 "uuid": "7082c8cb-b8e8-42fb-8fb1-1fac159b10a9", 00:10:10.191 "is_configured": true, 00:10:10.191 "data_offset": 0, 00:10:10.191 "data_size": 65536 00:10:10.191 }, 00:10:10.191 { 00:10:10.191 "name": "BaseBdev4", 00:10:10.191 "uuid": "24ccffab-47f2-46b3-8090-415f4d8bba97", 00:10:10.191 "is_configured": true, 00:10:10.191 "data_offset": 0, 00:10:10.191 "data_size": 65536 00:10:10.191 } 00:10:10.191 ] 00:10:10.191 }' 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.191 13:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.760 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.760 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.760 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.760 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.760 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.760 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.760 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.761 [2024-11-20 13:23:52.227009] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.761 
13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.761 "name": "Existed_Raid", 00:10:10.761 "aliases": [ 00:10:10.761 "43fdee38-7f86-4ea6-9e36-6a7895a2ccd3" 00:10:10.761 ], 00:10:10.761 "product_name": "Raid Volume", 00:10:10.761 "block_size": 512, 00:10:10.761 "num_blocks": 262144, 00:10:10.761 "uuid": "43fdee38-7f86-4ea6-9e36-6a7895a2ccd3", 00:10:10.761 "assigned_rate_limits": { 00:10:10.761 "rw_ios_per_sec": 0, 00:10:10.761 "rw_mbytes_per_sec": 0, 00:10:10.761 "r_mbytes_per_sec": 0, 00:10:10.761 "w_mbytes_per_sec": 0 00:10:10.761 }, 00:10:10.761 "claimed": false, 00:10:10.761 "zoned": false, 00:10:10.761 "supported_io_types": { 00:10:10.761 "read": true, 00:10:10.761 "write": true, 00:10:10.761 "unmap": true, 00:10:10.761 "flush": true, 00:10:10.761 "reset": true, 00:10:10.761 "nvme_admin": false, 00:10:10.761 "nvme_io": false, 00:10:10.761 "nvme_io_md": false, 00:10:10.761 "write_zeroes": true, 00:10:10.761 "zcopy": false, 00:10:10.761 "get_zone_info": false, 00:10:10.761 "zone_management": false, 00:10:10.761 "zone_append": false, 00:10:10.761 "compare": false, 00:10:10.761 "compare_and_write": false, 00:10:10.761 "abort": false, 00:10:10.761 "seek_hole": false, 00:10:10.761 "seek_data": false, 00:10:10.761 "copy": false, 00:10:10.761 "nvme_iov_md": false 00:10:10.761 }, 00:10:10.761 "memory_domains": [ 00:10:10.761 { 00:10:10.761 "dma_device_id": "system", 00:10:10.761 "dma_device_type": 1 00:10:10.761 }, 00:10:10.761 { 00:10:10.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.761 "dma_device_type": 2 00:10:10.761 }, 00:10:10.761 { 00:10:10.761 "dma_device_id": "system", 00:10:10.761 "dma_device_type": 1 00:10:10.761 }, 00:10:10.761 { 00:10:10.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.761 "dma_device_type": 2 00:10:10.761 }, 00:10:10.761 { 00:10:10.761 "dma_device_id": "system", 00:10:10.761 "dma_device_type": 1 00:10:10.761 }, 00:10:10.761 { 00:10:10.761 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:10.761 "dma_device_type": 2 00:10:10.761 }, 00:10:10.761 { 00:10:10.761 "dma_device_id": "system", 00:10:10.761 "dma_device_type": 1 00:10:10.761 }, 00:10:10.761 { 00:10:10.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.761 "dma_device_type": 2 00:10:10.761 } 00:10:10.761 ], 00:10:10.761 "driver_specific": { 00:10:10.761 "raid": { 00:10:10.761 "uuid": "43fdee38-7f86-4ea6-9e36-6a7895a2ccd3", 00:10:10.761 "strip_size_kb": 64, 00:10:10.761 "state": "online", 00:10:10.761 "raid_level": "concat", 00:10:10.761 "superblock": false, 00:10:10.761 "num_base_bdevs": 4, 00:10:10.761 "num_base_bdevs_discovered": 4, 00:10:10.761 "num_base_bdevs_operational": 4, 00:10:10.761 "base_bdevs_list": [ 00:10:10.761 { 00:10:10.761 "name": "BaseBdev1", 00:10:10.761 "uuid": "0a260600-553b-4f34-b274-59ffbccd64d5", 00:10:10.761 "is_configured": true, 00:10:10.761 "data_offset": 0, 00:10:10.761 "data_size": 65536 00:10:10.761 }, 00:10:10.761 { 00:10:10.761 "name": "BaseBdev2", 00:10:10.761 "uuid": "119ac189-4d88-4341-8f55-1bef9080e4a6", 00:10:10.761 "is_configured": true, 00:10:10.761 "data_offset": 0, 00:10:10.761 "data_size": 65536 00:10:10.761 }, 00:10:10.761 { 00:10:10.761 "name": "BaseBdev3", 00:10:10.761 "uuid": "7082c8cb-b8e8-42fb-8fb1-1fac159b10a9", 00:10:10.761 "is_configured": true, 00:10:10.761 "data_offset": 0, 00:10:10.761 "data_size": 65536 00:10:10.761 }, 00:10:10.761 { 00:10:10.761 "name": "BaseBdev4", 00:10:10.761 "uuid": "24ccffab-47f2-46b3-8090-415f4d8bba97", 00:10:10.761 "is_configured": true, 00:10:10.761 "data_offset": 0, 00:10:10.761 "data_size": 65536 00:10:10.761 } 00:10:10.761 ] 00:10:10.761 } 00:10:10.761 } 00:10:10.761 }' 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:10.761 BaseBdev2 
00:10:10.761 BaseBdev3 00:10:10.761 BaseBdev4' 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.761 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.022 13:23:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.022 13:23:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.022 [2024-11-20 13:23:52.534210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.022 [2024-11-20 13:23:52.534246] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.022 [2024-11-20 13:23:52.534310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.022 "name": "Existed_Raid", 00:10:11.022 "uuid": "43fdee38-7f86-4ea6-9e36-6a7895a2ccd3", 00:10:11.022 "strip_size_kb": 64, 00:10:11.022 "state": "offline", 00:10:11.022 "raid_level": "concat", 00:10:11.022 "superblock": false, 00:10:11.022 "num_base_bdevs": 4, 00:10:11.022 "num_base_bdevs_discovered": 3, 00:10:11.022 "num_base_bdevs_operational": 3, 00:10:11.022 "base_bdevs_list": [ 00:10:11.022 { 00:10:11.022 "name": null, 00:10:11.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.022 "is_configured": false, 00:10:11.022 "data_offset": 0, 00:10:11.022 "data_size": 65536 00:10:11.022 }, 00:10:11.022 { 00:10:11.022 "name": "BaseBdev2", 00:10:11.022 "uuid": "119ac189-4d88-4341-8f55-1bef9080e4a6", 00:10:11.022 "is_configured": 
true, 00:10:11.022 "data_offset": 0, 00:10:11.022 "data_size": 65536 00:10:11.022 }, 00:10:11.022 { 00:10:11.022 "name": "BaseBdev3", 00:10:11.022 "uuid": "7082c8cb-b8e8-42fb-8fb1-1fac159b10a9", 00:10:11.022 "is_configured": true, 00:10:11.022 "data_offset": 0, 00:10:11.022 "data_size": 65536 00:10:11.022 }, 00:10:11.022 { 00:10:11.022 "name": "BaseBdev4", 00:10:11.022 "uuid": "24ccffab-47f2-46b3-8090-415f4d8bba97", 00:10:11.022 "is_configured": true, 00:10:11.022 "data_offset": 0, 00:10:11.022 "data_size": 65536 00:10:11.022 } 00:10:11.022 ] 00:10:11.022 }' 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.022 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.283 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:11.283 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.283 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.283 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.283 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.283 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.542 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.542 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.542 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.542 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:11.542 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:11.542 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.542 [2024-11-20 13:23:52.969386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:11.542 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.542 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.542 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.542 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.542 13:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.542 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.542 13:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.542 [2024-11-20 13:23:53.045923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.542 13:23:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.542 [2024-11-20 13:23:53.121040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:11.542 [2024-11-20 13:23:53.121142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:11.542 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:11.543 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:11.543 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:11.543 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.543 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:11.543 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.543 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.543 BaseBdev2 00:10:11.543 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.543 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:11.543 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:11.543 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.543 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:11.543 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.543 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:11.543 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.543 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.543 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.804 [ 00:10:11.804 { 00:10:11.804 "name": "BaseBdev2", 00:10:11.804 "aliases": [ 00:10:11.804 "bfa7fccc-b4b0-4d9c-a902-849c4a372492" 00:10:11.804 ], 00:10:11.804 "product_name": "Malloc disk", 00:10:11.804 "block_size": 512, 00:10:11.804 "num_blocks": 65536, 00:10:11.804 "uuid": "bfa7fccc-b4b0-4d9c-a902-849c4a372492", 00:10:11.804 "assigned_rate_limits": { 00:10:11.804 "rw_ios_per_sec": 0, 00:10:11.804 "rw_mbytes_per_sec": 0, 00:10:11.804 "r_mbytes_per_sec": 0, 00:10:11.804 "w_mbytes_per_sec": 0 00:10:11.804 }, 00:10:11.804 "claimed": false, 00:10:11.804 "zoned": false, 00:10:11.804 "supported_io_types": { 00:10:11.804 "read": true, 00:10:11.804 "write": true, 00:10:11.804 "unmap": true, 00:10:11.804 "flush": true, 00:10:11.804 "reset": true, 00:10:11.804 "nvme_admin": false, 00:10:11.804 "nvme_io": false, 00:10:11.804 "nvme_io_md": false, 00:10:11.804 "write_zeroes": true, 00:10:11.804 "zcopy": true, 00:10:11.804 "get_zone_info": false, 00:10:11.804 "zone_management": false, 00:10:11.804 "zone_append": false, 00:10:11.804 "compare": false, 00:10:11.804 "compare_and_write": false, 00:10:11.804 "abort": true, 00:10:11.804 "seek_hole": false, 00:10:11.804 
"seek_data": false, 00:10:11.804 "copy": true, 00:10:11.804 "nvme_iov_md": false 00:10:11.804 }, 00:10:11.804 "memory_domains": [ 00:10:11.804 { 00:10:11.804 "dma_device_id": "system", 00:10:11.804 "dma_device_type": 1 00:10:11.804 }, 00:10:11.804 { 00:10:11.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.804 "dma_device_type": 2 00:10:11.804 } 00:10:11.804 ], 00:10:11.804 "driver_specific": {} 00:10:11.804 } 00:10:11.804 ] 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.804 BaseBdev3 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.804 [ 00:10:11.804 { 00:10:11.804 "name": "BaseBdev3", 00:10:11.804 "aliases": [ 00:10:11.804 "394f2c12-050f-40b0-8645-dc22dd45c104" 00:10:11.804 ], 00:10:11.804 "product_name": "Malloc disk", 00:10:11.804 "block_size": 512, 00:10:11.804 "num_blocks": 65536, 00:10:11.804 "uuid": "394f2c12-050f-40b0-8645-dc22dd45c104", 00:10:11.804 "assigned_rate_limits": { 00:10:11.804 "rw_ios_per_sec": 0, 00:10:11.804 "rw_mbytes_per_sec": 0, 00:10:11.804 "r_mbytes_per_sec": 0, 00:10:11.804 "w_mbytes_per_sec": 0 00:10:11.804 }, 00:10:11.804 "claimed": false, 00:10:11.804 "zoned": false, 00:10:11.804 "supported_io_types": { 00:10:11.804 "read": true, 00:10:11.804 "write": true, 00:10:11.804 "unmap": true, 00:10:11.804 "flush": true, 00:10:11.804 "reset": true, 00:10:11.804 "nvme_admin": false, 00:10:11.804 "nvme_io": false, 00:10:11.804 "nvme_io_md": false, 00:10:11.804 "write_zeroes": true, 00:10:11.804 "zcopy": true, 00:10:11.804 "get_zone_info": false, 00:10:11.804 "zone_management": false, 00:10:11.804 "zone_append": false, 00:10:11.804 "compare": false, 00:10:11.804 "compare_and_write": false, 00:10:11.804 "abort": true, 00:10:11.804 "seek_hole": false, 00:10:11.804 "seek_data": false, 
00:10:11.804 "copy": true, 00:10:11.804 "nvme_iov_md": false 00:10:11.804 }, 00:10:11.804 "memory_domains": [ 00:10:11.804 { 00:10:11.804 "dma_device_id": "system", 00:10:11.804 "dma_device_type": 1 00:10:11.804 }, 00:10:11.804 { 00:10:11.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.804 "dma_device_type": 2 00:10:11.804 } 00:10:11.804 ], 00:10:11.804 "driver_specific": {} 00:10:11.804 } 00:10:11.804 ] 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.804 BaseBdev4 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.804 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.805 
13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.805 [ 00:10:11.805 { 00:10:11.805 "name": "BaseBdev4", 00:10:11.805 "aliases": [ 00:10:11.805 "ca237db6-c09c-4f2b-ba95-6d86ec610c53" 00:10:11.805 ], 00:10:11.805 "product_name": "Malloc disk", 00:10:11.805 "block_size": 512, 00:10:11.805 "num_blocks": 65536, 00:10:11.805 "uuid": "ca237db6-c09c-4f2b-ba95-6d86ec610c53", 00:10:11.805 "assigned_rate_limits": { 00:10:11.805 "rw_ios_per_sec": 0, 00:10:11.805 "rw_mbytes_per_sec": 0, 00:10:11.805 "r_mbytes_per_sec": 0, 00:10:11.805 "w_mbytes_per_sec": 0 00:10:11.805 }, 00:10:11.805 "claimed": false, 00:10:11.805 "zoned": false, 00:10:11.805 "supported_io_types": { 00:10:11.805 "read": true, 00:10:11.805 "write": true, 00:10:11.805 "unmap": true, 00:10:11.805 "flush": true, 00:10:11.805 "reset": true, 00:10:11.805 "nvme_admin": false, 00:10:11.805 "nvme_io": false, 00:10:11.805 "nvme_io_md": false, 00:10:11.805 "write_zeroes": true, 00:10:11.805 "zcopy": true, 00:10:11.805 "get_zone_info": false, 00:10:11.805 "zone_management": false, 00:10:11.805 "zone_append": false, 00:10:11.805 "compare": false, 00:10:11.805 "compare_and_write": false, 00:10:11.805 "abort": true, 00:10:11.805 "seek_hole": false, 00:10:11.805 "seek_data": false, 00:10:11.805 
"copy": true, 00:10:11.805 "nvme_iov_md": false 00:10:11.805 }, 00:10:11.805 "memory_domains": [ 00:10:11.805 { 00:10:11.805 "dma_device_id": "system", 00:10:11.805 "dma_device_type": 1 00:10:11.805 }, 00:10:11.805 { 00:10:11.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.805 "dma_device_type": 2 00:10:11.805 } 00:10:11.805 ], 00:10:11.805 "driver_specific": {} 00:10:11.805 } 00:10:11.805 ] 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.805 [2024-11-20 13:23:53.349919] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:11.805 [2024-11-20 13:23:53.349965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:11.805 [2024-11-20 13:23:53.350037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.805 [2024-11-20 13:23:53.351866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.805 [2024-11-20 13:23:53.351916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.805 13:23:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.805 "name": "Existed_Raid", 00:10:11.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.805 "strip_size_kb": 64, 00:10:11.805 "state": "configuring", 00:10:11.805 
"raid_level": "concat", 00:10:11.805 "superblock": false, 00:10:11.805 "num_base_bdevs": 4, 00:10:11.805 "num_base_bdevs_discovered": 3, 00:10:11.805 "num_base_bdevs_operational": 4, 00:10:11.805 "base_bdevs_list": [ 00:10:11.805 { 00:10:11.805 "name": "BaseBdev1", 00:10:11.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.805 "is_configured": false, 00:10:11.805 "data_offset": 0, 00:10:11.805 "data_size": 0 00:10:11.805 }, 00:10:11.805 { 00:10:11.805 "name": "BaseBdev2", 00:10:11.805 "uuid": "bfa7fccc-b4b0-4d9c-a902-849c4a372492", 00:10:11.805 "is_configured": true, 00:10:11.805 "data_offset": 0, 00:10:11.805 "data_size": 65536 00:10:11.805 }, 00:10:11.805 { 00:10:11.805 "name": "BaseBdev3", 00:10:11.805 "uuid": "394f2c12-050f-40b0-8645-dc22dd45c104", 00:10:11.805 "is_configured": true, 00:10:11.805 "data_offset": 0, 00:10:11.805 "data_size": 65536 00:10:11.805 }, 00:10:11.805 { 00:10:11.805 "name": "BaseBdev4", 00:10:11.805 "uuid": "ca237db6-c09c-4f2b-ba95-6d86ec610c53", 00:10:11.805 "is_configured": true, 00:10:11.805 "data_offset": 0, 00:10:11.805 "data_size": 65536 00:10:11.805 } 00:10:11.805 ] 00:10:11.805 }' 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.805 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.375 [2024-11-20 13:23:53.789190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.375 "name": "Existed_Raid", 00:10:12.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.375 "strip_size_kb": 64, 00:10:12.375 "state": "configuring", 00:10:12.375 "raid_level": "concat", 00:10:12.375 "superblock": false, 
00:10:12.375 "num_base_bdevs": 4, 00:10:12.375 "num_base_bdevs_discovered": 2, 00:10:12.375 "num_base_bdevs_operational": 4, 00:10:12.375 "base_bdevs_list": [ 00:10:12.375 { 00:10:12.375 "name": "BaseBdev1", 00:10:12.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.375 "is_configured": false, 00:10:12.375 "data_offset": 0, 00:10:12.375 "data_size": 0 00:10:12.375 }, 00:10:12.375 { 00:10:12.375 "name": null, 00:10:12.375 "uuid": "bfa7fccc-b4b0-4d9c-a902-849c4a372492", 00:10:12.375 "is_configured": false, 00:10:12.375 "data_offset": 0, 00:10:12.375 "data_size": 65536 00:10:12.375 }, 00:10:12.375 { 00:10:12.375 "name": "BaseBdev3", 00:10:12.375 "uuid": "394f2c12-050f-40b0-8645-dc22dd45c104", 00:10:12.375 "is_configured": true, 00:10:12.375 "data_offset": 0, 00:10:12.375 "data_size": 65536 00:10:12.375 }, 00:10:12.375 { 00:10:12.375 "name": "BaseBdev4", 00:10:12.375 "uuid": "ca237db6-c09c-4f2b-ba95-6d86ec610c53", 00:10:12.375 "is_configured": true, 00:10:12.375 "data_offset": 0, 00:10:12.375 "data_size": 65536 00:10:12.375 } 00:10:12.375 ] 00:10:12.375 }' 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.375 13:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:12.635 13:23:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.635 [2024-11-20 13:23:54.251636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.635 BaseBdev1 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.635 [ 00:10:12.635 { 00:10:12.635 "name": "BaseBdev1", 00:10:12.635 "aliases": [ 00:10:12.635 "ed8d54fa-eacf-4fdf-bb3f-085682d28eae" 00:10:12.635 ], 00:10:12.635 "product_name": "Malloc disk", 00:10:12.635 "block_size": 512, 00:10:12.635 "num_blocks": 65536, 00:10:12.635 "uuid": "ed8d54fa-eacf-4fdf-bb3f-085682d28eae", 00:10:12.635 "assigned_rate_limits": { 00:10:12.635 "rw_ios_per_sec": 0, 00:10:12.635 "rw_mbytes_per_sec": 0, 00:10:12.635 "r_mbytes_per_sec": 0, 00:10:12.635 "w_mbytes_per_sec": 0 00:10:12.635 }, 00:10:12.635 "claimed": true, 00:10:12.635 "claim_type": "exclusive_write", 00:10:12.635 "zoned": false, 00:10:12.635 "supported_io_types": { 00:10:12.635 "read": true, 00:10:12.635 "write": true, 00:10:12.635 "unmap": true, 00:10:12.635 "flush": true, 00:10:12.635 "reset": true, 00:10:12.635 "nvme_admin": false, 00:10:12.635 "nvme_io": false, 00:10:12.635 "nvme_io_md": false, 00:10:12.635 "write_zeroes": true, 00:10:12.635 "zcopy": true, 00:10:12.635 "get_zone_info": false, 00:10:12.635 "zone_management": false, 00:10:12.635 "zone_append": false, 00:10:12.635 "compare": false, 00:10:12.635 "compare_and_write": false, 00:10:12.635 "abort": true, 00:10:12.635 "seek_hole": false, 00:10:12.635 "seek_data": false, 00:10:12.635 "copy": true, 00:10:12.635 "nvme_iov_md": false 00:10:12.635 }, 00:10:12.635 "memory_domains": [ 00:10:12.635 { 00:10:12.635 "dma_device_id": "system", 00:10:12.635 "dma_device_type": 1 00:10:12.635 }, 00:10:12.635 { 00:10:12.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.635 "dma_device_type": 2 00:10:12.635 } 00:10:12.635 ], 00:10:12.635 "driver_specific": {} 00:10:12.635 } 00:10:12.635 ] 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.635 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.895 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.895 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.895 "name": "Existed_Raid", 00:10:12.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.895 "strip_size_kb": 64, 00:10:12.895 "state": "configuring", 00:10:12.895 "raid_level": "concat", 00:10:12.895 "superblock": false, 
00:10:12.895 "num_base_bdevs": 4, 00:10:12.895 "num_base_bdevs_discovered": 3, 00:10:12.895 "num_base_bdevs_operational": 4, 00:10:12.895 "base_bdevs_list": [ 00:10:12.895 { 00:10:12.895 "name": "BaseBdev1", 00:10:12.895 "uuid": "ed8d54fa-eacf-4fdf-bb3f-085682d28eae", 00:10:12.895 "is_configured": true, 00:10:12.895 "data_offset": 0, 00:10:12.895 "data_size": 65536 00:10:12.895 }, 00:10:12.895 { 00:10:12.895 "name": null, 00:10:12.895 "uuid": "bfa7fccc-b4b0-4d9c-a902-849c4a372492", 00:10:12.895 "is_configured": false, 00:10:12.895 "data_offset": 0, 00:10:12.895 "data_size": 65536 00:10:12.896 }, 00:10:12.896 { 00:10:12.896 "name": "BaseBdev3", 00:10:12.896 "uuid": "394f2c12-050f-40b0-8645-dc22dd45c104", 00:10:12.896 "is_configured": true, 00:10:12.896 "data_offset": 0, 00:10:12.896 "data_size": 65536 00:10:12.896 }, 00:10:12.896 { 00:10:12.896 "name": "BaseBdev4", 00:10:12.896 "uuid": "ca237db6-c09c-4f2b-ba95-6d86ec610c53", 00:10:12.896 "is_configured": true, 00:10:12.896 "data_offset": 0, 00:10:12.896 "data_size": 65536 00:10:12.896 } 00:10:12.896 ] 00:10:12.896 }' 00:10:12.896 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.896 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:13.156 13:23:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.156 [2024-11-20 13:23:54.726935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.156 13:23:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.156 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.156 "name": "Existed_Raid", 00:10:13.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.156 "strip_size_kb": 64, 00:10:13.156 "state": "configuring", 00:10:13.156 "raid_level": "concat", 00:10:13.156 "superblock": false, 00:10:13.156 "num_base_bdevs": 4, 00:10:13.156 "num_base_bdevs_discovered": 2, 00:10:13.156 "num_base_bdevs_operational": 4, 00:10:13.156 "base_bdevs_list": [ 00:10:13.156 { 00:10:13.156 "name": "BaseBdev1", 00:10:13.156 "uuid": "ed8d54fa-eacf-4fdf-bb3f-085682d28eae", 00:10:13.156 "is_configured": true, 00:10:13.156 "data_offset": 0, 00:10:13.156 "data_size": 65536 00:10:13.156 }, 00:10:13.156 { 00:10:13.156 "name": null, 00:10:13.156 "uuid": "bfa7fccc-b4b0-4d9c-a902-849c4a372492", 00:10:13.156 "is_configured": false, 00:10:13.156 "data_offset": 0, 00:10:13.156 "data_size": 65536 00:10:13.156 }, 00:10:13.156 { 00:10:13.156 "name": null, 00:10:13.156 "uuid": "394f2c12-050f-40b0-8645-dc22dd45c104", 00:10:13.156 "is_configured": false, 00:10:13.156 "data_offset": 0, 00:10:13.156 "data_size": 65536 00:10:13.156 }, 00:10:13.156 { 00:10:13.156 "name": "BaseBdev4", 00:10:13.156 "uuid": "ca237db6-c09c-4f2b-ba95-6d86ec610c53", 00:10:13.156 "is_configured": true, 00:10:13.157 "data_offset": 0, 00:10:13.157 "data_size": 65536 00:10:13.157 } 00:10:13.157 ] 00:10:13.157 }' 00:10:13.157 13:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.157 13:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.725 [2024-11-20 13:23:55.194157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.725 "name": "Existed_Raid", 00:10:13.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.725 "strip_size_kb": 64, 00:10:13.725 "state": "configuring", 00:10:13.725 "raid_level": "concat", 00:10:13.725 "superblock": false, 00:10:13.725 "num_base_bdevs": 4, 00:10:13.725 "num_base_bdevs_discovered": 3, 00:10:13.725 "num_base_bdevs_operational": 4, 00:10:13.725 "base_bdevs_list": [ 00:10:13.725 { 00:10:13.725 "name": "BaseBdev1", 00:10:13.725 "uuid": "ed8d54fa-eacf-4fdf-bb3f-085682d28eae", 00:10:13.725 "is_configured": true, 00:10:13.725 "data_offset": 0, 00:10:13.725 "data_size": 65536 00:10:13.725 }, 00:10:13.725 { 00:10:13.725 "name": null, 00:10:13.725 "uuid": "bfa7fccc-b4b0-4d9c-a902-849c4a372492", 00:10:13.725 "is_configured": false, 00:10:13.725 "data_offset": 0, 00:10:13.725 "data_size": 65536 00:10:13.725 }, 00:10:13.725 { 00:10:13.725 "name": "BaseBdev3", 00:10:13.725 "uuid": 
"394f2c12-050f-40b0-8645-dc22dd45c104", 00:10:13.725 "is_configured": true, 00:10:13.725 "data_offset": 0, 00:10:13.725 "data_size": 65536 00:10:13.725 }, 00:10:13.725 { 00:10:13.725 "name": "BaseBdev4", 00:10:13.725 "uuid": "ca237db6-c09c-4f2b-ba95-6d86ec610c53", 00:10:13.725 "is_configured": true, 00:10:13.725 "data_offset": 0, 00:10:13.725 "data_size": 65536 00:10:13.725 } 00:10:13.725 ] 00:10:13.725 }' 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.725 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.984 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:13.984 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.984 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.984 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.243 [2024-11-20 13:23:55.697352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.243 "name": "Existed_Raid", 00:10:14.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.243 "strip_size_kb": 64, 00:10:14.243 "state": "configuring", 00:10:14.243 "raid_level": "concat", 00:10:14.243 "superblock": false, 00:10:14.243 "num_base_bdevs": 4, 00:10:14.243 
"num_base_bdevs_discovered": 2, 00:10:14.243 "num_base_bdevs_operational": 4, 00:10:14.243 "base_bdevs_list": [ 00:10:14.243 { 00:10:14.243 "name": null, 00:10:14.243 "uuid": "ed8d54fa-eacf-4fdf-bb3f-085682d28eae", 00:10:14.243 "is_configured": false, 00:10:14.243 "data_offset": 0, 00:10:14.243 "data_size": 65536 00:10:14.243 }, 00:10:14.243 { 00:10:14.243 "name": null, 00:10:14.243 "uuid": "bfa7fccc-b4b0-4d9c-a902-849c4a372492", 00:10:14.243 "is_configured": false, 00:10:14.243 "data_offset": 0, 00:10:14.243 "data_size": 65536 00:10:14.243 }, 00:10:14.243 { 00:10:14.243 "name": "BaseBdev3", 00:10:14.243 "uuid": "394f2c12-050f-40b0-8645-dc22dd45c104", 00:10:14.243 "is_configured": true, 00:10:14.243 "data_offset": 0, 00:10:14.243 "data_size": 65536 00:10:14.243 }, 00:10:14.243 { 00:10:14.243 "name": "BaseBdev4", 00:10:14.243 "uuid": "ca237db6-c09c-4f2b-ba95-6d86ec610c53", 00:10:14.243 "is_configured": true, 00:10:14.243 "data_offset": 0, 00:10:14.243 "data_size": 65536 00:10:14.243 } 00:10:14.243 ] 00:10:14.243 }' 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.243 13:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.595 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.595 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.595 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.595 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:14.595 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.855 [2024-11-20 13:23:56.183147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.855 "name": "Existed_Raid", 00:10:14.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.855 "strip_size_kb": 64, 00:10:14.855 "state": "configuring", 00:10:14.855 "raid_level": "concat", 00:10:14.855 "superblock": false, 00:10:14.855 "num_base_bdevs": 4, 00:10:14.855 "num_base_bdevs_discovered": 3, 00:10:14.855 "num_base_bdevs_operational": 4, 00:10:14.855 "base_bdevs_list": [ 00:10:14.855 { 00:10:14.855 "name": null, 00:10:14.855 "uuid": "ed8d54fa-eacf-4fdf-bb3f-085682d28eae", 00:10:14.855 "is_configured": false, 00:10:14.855 "data_offset": 0, 00:10:14.855 "data_size": 65536 00:10:14.855 }, 00:10:14.855 { 00:10:14.855 "name": "BaseBdev2", 00:10:14.855 "uuid": "bfa7fccc-b4b0-4d9c-a902-849c4a372492", 00:10:14.855 "is_configured": true, 00:10:14.855 "data_offset": 0, 00:10:14.855 "data_size": 65536 00:10:14.855 }, 00:10:14.855 { 00:10:14.855 "name": "BaseBdev3", 00:10:14.855 "uuid": "394f2c12-050f-40b0-8645-dc22dd45c104", 00:10:14.855 "is_configured": true, 00:10:14.855 "data_offset": 0, 00:10:14.855 "data_size": 65536 00:10:14.855 }, 00:10:14.855 { 00:10:14.855 "name": "BaseBdev4", 00:10:14.855 "uuid": "ca237db6-c09c-4f2b-ba95-6d86ec610c53", 00:10:14.855 "is_configured": true, 00:10:14.855 "data_offset": 0, 00:10:14.855 "data_size": 65536 00:10:14.855 } 00:10:14.855 ] 00:10:14.855 }' 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.855 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.115 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:15.115 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.115 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.115 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ed8d54fa-eacf-4fdf-bb3f-085682d28eae 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.116 [2024-11-20 13:23:56.733276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:15.116 [2024-11-20 13:23:56.733420] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:15.116 [2024-11-20 13:23:56.733450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:15.116 [2024-11-20 13:23:56.733758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:10:15.116 [2024-11-20 13:23:56.733924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:15.116 [2024-11-20 13:23:56.733971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:15.116 [2024-11-20 13:23:56.734222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.116 NewBaseBdev 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.116 [ 00:10:15.116 { 00:10:15.116 "name": "NewBaseBdev", 00:10:15.116 "aliases": [ 00:10:15.116 "ed8d54fa-eacf-4fdf-bb3f-085682d28eae" 00:10:15.116 ], 00:10:15.116 "product_name": "Malloc disk", 00:10:15.116 "block_size": 512, 00:10:15.116 "num_blocks": 65536, 00:10:15.116 "uuid": "ed8d54fa-eacf-4fdf-bb3f-085682d28eae", 00:10:15.116 "assigned_rate_limits": { 00:10:15.116 "rw_ios_per_sec": 0, 00:10:15.116 "rw_mbytes_per_sec": 0, 00:10:15.116 "r_mbytes_per_sec": 0, 00:10:15.116 "w_mbytes_per_sec": 0 00:10:15.116 }, 00:10:15.116 "claimed": true, 00:10:15.116 "claim_type": "exclusive_write", 00:10:15.116 "zoned": false, 00:10:15.116 "supported_io_types": { 00:10:15.116 "read": true, 00:10:15.116 "write": true, 00:10:15.116 "unmap": true, 00:10:15.116 "flush": true, 00:10:15.116 "reset": true, 00:10:15.116 "nvme_admin": false, 00:10:15.116 "nvme_io": false, 00:10:15.116 "nvme_io_md": false, 00:10:15.116 "write_zeroes": true, 00:10:15.116 "zcopy": true, 00:10:15.116 "get_zone_info": false, 00:10:15.116 "zone_management": false, 00:10:15.116 "zone_append": false, 00:10:15.116 "compare": false, 00:10:15.116 "compare_and_write": false, 00:10:15.116 "abort": true, 00:10:15.116 "seek_hole": false, 00:10:15.116 "seek_data": false, 00:10:15.116 "copy": true, 00:10:15.116 "nvme_iov_md": false 00:10:15.116 }, 00:10:15.116 "memory_domains": [ 00:10:15.116 { 00:10:15.116 "dma_device_id": "system", 00:10:15.116 "dma_device_type": 1 00:10:15.116 }, 00:10:15.116 { 00:10:15.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.116 "dma_device_type": 2 00:10:15.116 } 00:10:15.116 ], 00:10:15.116 "driver_specific": {} 00:10:15.116 } 00:10:15.116 ] 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.116 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.376 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.376 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.376 "name": "Existed_Raid", 00:10:15.376 "uuid": "b2a55dd1-934b-474a-93fc-4cf135d7d388", 00:10:15.376 "strip_size_kb": 64, 00:10:15.376 "state": "online", 00:10:15.376 "raid_level": "concat", 00:10:15.376 "superblock": false, 00:10:15.376 
"num_base_bdevs": 4, 00:10:15.376 "num_base_bdevs_discovered": 4, 00:10:15.376 "num_base_bdevs_operational": 4, 00:10:15.376 "base_bdevs_list": [ 00:10:15.376 { 00:10:15.376 "name": "NewBaseBdev", 00:10:15.376 "uuid": "ed8d54fa-eacf-4fdf-bb3f-085682d28eae", 00:10:15.376 "is_configured": true, 00:10:15.376 "data_offset": 0, 00:10:15.376 "data_size": 65536 00:10:15.376 }, 00:10:15.376 { 00:10:15.376 "name": "BaseBdev2", 00:10:15.376 "uuid": "bfa7fccc-b4b0-4d9c-a902-849c4a372492", 00:10:15.376 "is_configured": true, 00:10:15.376 "data_offset": 0, 00:10:15.376 "data_size": 65536 00:10:15.376 }, 00:10:15.376 { 00:10:15.376 "name": "BaseBdev3", 00:10:15.376 "uuid": "394f2c12-050f-40b0-8645-dc22dd45c104", 00:10:15.376 "is_configured": true, 00:10:15.376 "data_offset": 0, 00:10:15.376 "data_size": 65536 00:10:15.376 }, 00:10:15.376 { 00:10:15.376 "name": "BaseBdev4", 00:10:15.376 "uuid": "ca237db6-c09c-4f2b-ba95-6d86ec610c53", 00:10:15.376 "is_configured": true, 00:10:15.376 "data_offset": 0, 00:10:15.376 "data_size": 65536 00:10:15.376 } 00:10:15.376 ] 00:10:15.376 }' 00:10:15.376 13:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.376 13:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.636 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:15.636 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:15.636 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.636 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.636 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.636 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.637 13:23:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.637 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:15.637 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.637 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.637 [2024-11-20 13:23:57.228827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.637 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.637 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.637 "name": "Existed_Raid", 00:10:15.637 "aliases": [ 00:10:15.637 "b2a55dd1-934b-474a-93fc-4cf135d7d388" 00:10:15.637 ], 00:10:15.637 "product_name": "Raid Volume", 00:10:15.637 "block_size": 512, 00:10:15.637 "num_blocks": 262144, 00:10:15.637 "uuid": "b2a55dd1-934b-474a-93fc-4cf135d7d388", 00:10:15.637 "assigned_rate_limits": { 00:10:15.637 "rw_ios_per_sec": 0, 00:10:15.637 "rw_mbytes_per_sec": 0, 00:10:15.637 "r_mbytes_per_sec": 0, 00:10:15.637 "w_mbytes_per_sec": 0 00:10:15.637 }, 00:10:15.637 "claimed": false, 00:10:15.637 "zoned": false, 00:10:15.637 "supported_io_types": { 00:10:15.637 "read": true, 00:10:15.637 "write": true, 00:10:15.637 "unmap": true, 00:10:15.637 "flush": true, 00:10:15.637 "reset": true, 00:10:15.637 "nvme_admin": false, 00:10:15.637 "nvme_io": false, 00:10:15.637 "nvme_io_md": false, 00:10:15.637 "write_zeroes": true, 00:10:15.637 "zcopy": false, 00:10:15.637 "get_zone_info": false, 00:10:15.637 "zone_management": false, 00:10:15.637 "zone_append": false, 00:10:15.637 "compare": false, 00:10:15.637 "compare_and_write": false, 00:10:15.637 "abort": false, 00:10:15.637 "seek_hole": false, 00:10:15.637 "seek_data": false, 00:10:15.637 "copy": false, 00:10:15.637 "nvme_iov_md": false 00:10:15.637 }, 
00:10:15.637 "memory_domains": [ 00:10:15.637 { 00:10:15.637 "dma_device_id": "system", 00:10:15.637 "dma_device_type": 1 00:10:15.637 }, 00:10:15.637 { 00:10:15.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.637 "dma_device_type": 2 00:10:15.637 }, 00:10:15.637 { 00:10:15.637 "dma_device_id": "system", 00:10:15.637 "dma_device_type": 1 00:10:15.637 }, 00:10:15.637 { 00:10:15.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.637 "dma_device_type": 2 00:10:15.637 }, 00:10:15.637 { 00:10:15.637 "dma_device_id": "system", 00:10:15.637 "dma_device_type": 1 00:10:15.637 }, 00:10:15.637 { 00:10:15.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.637 "dma_device_type": 2 00:10:15.637 }, 00:10:15.637 { 00:10:15.637 "dma_device_id": "system", 00:10:15.637 "dma_device_type": 1 00:10:15.637 }, 00:10:15.637 { 00:10:15.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.637 "dma_device_type": 2 00:10:15.637 } 00:10:15.637 ], 00:10:15.637 "driver_specific": { 00:10:15.637 "raid": { 00:10:15.637 "uuid": "b2a55dd1-934b-474a-93fc-4cf135d7d388", 00:10:15.637 "strip_size_kb": 64, 00:10:15.637 "state": "online", 00:10:15.637 "raid_level": "concat", 00:10:15.637 "superblock": false, 00:10:15.637 "num_base_bdevs": 4, 00:10:15.637 "num_base_bdevs_discovered": 4, 00:10:15.637 "num_base_bdevs_operational": 4, 00:10:15.637 "base_bdevs_list": [ 00:10:15.637 { 00:10:15.637 "name": "NewBaseBdev", 00:10:15.637 "uuid": "ed8d54fa-eacf-4fdf-bb3f-085682d28eae", 00:10:15.637 "is_configured": true, 00:10:15.637 "data_offset": 0, 00:10:15.637 "data_size": 65536 00:10:15.637 }, 00:10:15.637 { 00:10:15.637 "name": "BaseBdev2", 00:10:15.637 "uuid": "bfa7fccc-b4b0-4d9c-a902-849c4a372492", 00:10:15.637 "is_configured": true, 00:10:15.637 "data_offset": 0, 00:10:15.637 "data_size": 65536 00:10:15.637 }, 00:10:15.637 { 00:10:15.637 "name": "BaseBdev3", 00:10:15.637 "uuid": "394f2c12-050f-40b0-8645-dc22dd45c104", 00:10:15.637 "is_configured": true, 00:10:15.637 "data_offset": 0, 
00:10:15.637 "data_size": 65536 00:10:15.637 }, 00:10:15.637 { 00:10:15.637 "name": "BaseBdev4", 00:10:15.637 "uuid": "ca237db6-c09c-4f2b-ba95-6d86ec610c53", 00:10:15.637 "is_configured": true, 00:10:15.637 "data_offset": 0, 00:10:15.637 "data_size": 65536 00:10:15.637 } 00:10:15.637 ] 00:10:15.637 } 00:10:15.637 } 00:10:15.637 }' 00:10:15.637 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:15.897 BaseBdev2 00:10:15.897 BaseBdev3 00:10:15.897 BaseBdev4' 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.897 [2024-11-20 13:23:57.527962] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.897 [2024-11-20 13:23:57.528045] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.897 [2024-11-20 13:23:57.528143] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.897 [2024-11-20 13:23:57.528224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.897 [2024-11-20 13:23:57.528249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81865 00:10:15.897 13:23:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 81865 ']' 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 81865 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.897 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81865 00:10:16.157 killing process with pid 81865 00:10:16.157 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:16.157 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:16.157 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81865' 00:10:16.157 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 81865 00:10:16.158 [2024-11-20 13:23:57.574754] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:16.158 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 81865 00:10:16.158 [2024-11-20 13:23:57.614445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:16.158 13:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:16.158 00:10:16.158 real 0m9.375s 00:10:16.158 user 0m16.071s 00:10:16.158 sys 0m1.865s 00:10:16.158 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.158 13:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.158 ************************************ 00:10:16.158 END TEST raid_state_function_test 00:10:16.158 ************************************ 00:10:16.417 13:23:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:16.417 13:23:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:16.417 13:23:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.417 13:23:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:16.417 ************************************ 00:10:16.417 START TEST raid_state_function_test_sb 00:10:16.417 ************************************ 00:10:16.417 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:16.417 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:16.417 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:16.417 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:16.417 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:16.417 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:16.417 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:16.417 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- 
# echo BaseBdev3 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82515 00:10:16.418 Process raid pid: 82515 00:10:16.418 
13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82515' 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82515 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82515 ']' 00:10:16.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.418 13:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.418 [2024-11-20 13:23:57.984880] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:16.418 [2024-11-20 13:23:57.985067] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.678 [2024-11-20 13:23:58.125937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.678 [2024-11-20 13:23:58.151225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.678 [2024-11-20 13:23:58.193799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.678 [2024-11-20 13:23:58.193918] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.246 [2024-11-20 13:23:58.807378] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.246 [2024-11-20 13:23:58.807493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.246 [2024-11-20 13:23:58.807526] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.246 [2024-11-20 13:23:58.807551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.246 [2024-11-20 13:23:58.807569] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:17.246 [2024-11-20 13:23:58.807601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:17.246 [2024-11-20 13:23:58.807620] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:17.246 [2024-11-20 13:23:58.807681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.246 
13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.246 "name": "Existed_Raid", 00:10:17.246 "uuid": "80a82346-2704-4ff9-a30f-f637a5f465e7", 00:10:17.246 "strip_size_kb": 64, 00:10:17.246 "state": "configuring", 00:10:17.246 "raid_level": "concat", 00:10:17.246 "superblock": true, 00:10:17.246 "num_base_bdevs": 4, 00:10:17.246 "num_base_bdevs_discovered": 0, 00:10:17.246 "num_base_bdevs_operational": 4, 00:10:17.246 "base_bdevs_list": [ 00:10:17.246 { 00:10:17.246 "name": "BaseBdev1", 00:10:17.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.246 "is_configured": false, 00:10:17.246 "data_offset": 0, 00:10:17.246 "data_size": 0 00:10:17.246 }, 00:10:17.246 { 00:10:17.246 "name": "BaseBdev2", 00:10:17.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.246 "is_configured": false, 00:10:17.246 "data_offset": 0, 00:10:17.246 "data_size": 0 00:10:17.246 }, 00:10:17.246 { 00:10:17.246 "name": "BaseBdev3", 00:10:17.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.246 "is_configured": false, 00:10:17.246 "data_offset": 0, 00:10:17.246 "data_size": 0 00:10:17.246 }, 00:10:17.246 { 00:10:17.246 "name": "BaseBdev4", 00:10:17.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.246 "is_configured": false, 00:10:17.246 "data_offset": 0, 00:10:17.246 "data_size": 0 00:10:17.246 } 00:10:17.246 ] 00:10:17.246 }' 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.246 13:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.814 13:23:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.814 [2024-11-20 13:23:59.222556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.814 [2024-11-20 13:23:59.222645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.814 [2024-11-20 13:23:59.234561] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.814 [2024-11-20 13:23:59.234606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.814 [2024-11-20 13:23:59.234614] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.814 [2024-11-20 13:23:59.234623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.814 [2024-11-20 13:23:59.234630] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:17.814 [2024-11-20 13:23:59.234639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:17.814 [2024-11-20 13:23:59.234645] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:17.814 [2024-11-20 13:23:59.234653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.814 [2024-11-20 13:23:59.255438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.814 BaseBdev1 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.814 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.814 [ 00:10:17.814 { 00:10:17.815 "name": "BaseBdev1", 00:10:17.815 "aliases": [ 00:10:17.815 "3e29e999-5c89-4bdb-b7ac-a23ce6796153" 00:10:17.815 ], 00:10:17.815 "product_name": "Malloc disk", 00:10:17.815 "block_size": 512, 00:10:17.815 "num_blocks": 65536, 00:10:17.815 "uuid": "3e29e999-5c89-4bdb-b7ac-a23ce6796153", 00:10:17.815 "assigned_rate_limits": { 00:10:17.815 "rw_ios_per_sec": 0, 00:10:17.815 "rw_mbytes_per_sec": 0, 00:10:17.815 "r_mbytes_per_sec": 0, 00:10:17.815 "w_mbytes_per_sec": 0 00:10:17.815 }, 00:10:17.815 "claimed": true, 00:10:17.815 "claim_type": "exclusive_write", 00:10:17.815 "zoned": false, 00:10:17.815 "supported_io_types": { 00:10:17.815 "read": true, 00:10:17.815 "write": true, 00:10:17.815 "unmap": true, 00:10:17.815 "flush": true, 00:10:17.815 "reset": true, 00:10:17.815 "nvme_admin": false, 00:10:17.815 "nvme_io": false, 00:10:17.815 "nvme_io_md": false, 00:10:17.815 "write_zeroes": true, 00:10:17.815 "zcopy": true, 00:10:17.815 "get_zone_info": false, 00:10:17.815 "zone_management": false, 00:10:17.815 "zone_append": false, 00:10:17.815 "compare": false, 00:10:17.815 "compare_and_write": false, 00:10:17.815 "abort": true, 00:10:17.815 "seek_hole": false, 00:10:17.815 "seek_data": false, 00:10:17.815 "copy": true, 00:10:17.815 "nvme_iov_md": false 00:10:17.815 }, 00:10:17.815 "memory_domains": [ 00:10:17.815 { 00:10:17.815 "dma_device_id": "system", 00:10:17.815 "dma_device_type": 1 00:10:17.815 }, 00:10:17.815 { 00:10:17.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.815 "dma_device_type": 2 00:10:17.815 } 
00:10:17.815 ], 00:10:17.815 "driver_specific": {} 00:10:17.815 } 00:10:17.815 ] 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.815 13:23:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.815 "name": "Existed_Raid", 00:10:17.815 "uuid": "c119fd24-8f86-42ef-81f6-7ab2ac86401a", 00:10:17.815 "strip_size_kb": 64, 00:10:17.815 "state": "configuring", 00:10:17.815 "raid_level": "concat", 00:10:17.815 "superblock": true, 00:10:17.815 "num_base_bdevs": 4, 00:10:17.815 "num_base_bdevs_discovered": 1, 00:10:17.815 "num_base_bdevs_operational": 4, 00:10:17.815 "base_bdevs_list": [ 00:10:17.815 { 00:10:17.815 "name": "BaseBdev1", 00:10:17.815 "uuid": "3e29e999-5c89-4bdb-b7ac-a23ce6796153", 00:10:17.815 "is_configured": true, 00:10:17.815 "data_offset": 2048, 00:10:17.815 "data_size": 63488 00:10:17.815 }, 00:10:17.815 { 00:10:17.815 "name": "BaseBdev2", 00:10:17.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.815 "is_configured": false, 00:10:17.815 "data_offset": 0, 00:10:17.815 "data_size": 0 00:10:17.815 }, 00:10:17.815 { 00:10:17.815 "name": "BaseBdev3", 00:10:17.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.815 "is_configured": false, 00:10:17.815 "data_offset": 0, 00:10:17.815 "data_size": 0 00:10:17.815 }, 00:10:17.815 { 00:10:17.815 "name": "BaseBdev4", 00:10:17.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.815 "is_configured": false, 00:10:17.815 "data_offset": 0, 00:10:17.815 "data_size": 0 00:10:17.815 } 00:10:17.815 ] 00:10:17.815 }' 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.815 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.074 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:18.074 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.074 13:23:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.074 [2024-11-20 13:23:59.702722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:18.074 [2024-11-20 13:23:59.702776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:18.074 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.074 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.075 [2024-11-20 13:23:59.714750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.075 [2024-11-20 13:23:59.716691] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.075 [2024-11-20 13:23:59.716770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.075 [2024-11-20 13:23:59.716798] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:18.075 [2024-11-20 13:23:59.716819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:18.075 [2024-11-20 13:23:59.716837] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:18.075 [2024-11-20 13:23:59.716857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.075 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.335 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.335 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:18.335 "name": "Existed_Raid", 00:10:18.335 "uuid": "6b000f8c-c8d0-454b-ad7c-e9a6953eb180", 00:10:18.335 "strip_size_kb": 64, 00:10:18.335 "state": "configuring", 00:10:18.335 "raid_level": "concat", 00:10:18.335 "superblock": true, 00:10:18.335 "num_base_bdevs": 4, 00:10:18.335 "num_base_bdevs_discovered": 1, 00:10:18.335 "num_base_bdevs_operational": 4, 00:10:18.335 "base_bdevs_list": [ 00:10:18.335 { 00:10:18.335 "name": "BaseBdev1", 00:10:18.335 "uuid": "3e29e999-5c89-4bdb-b7ac-a23ce6796153", 00:10:18.335 "is_configured": true, 00:10:18.335 "data_offset": 2048, 00:10:18.335 "data_size": 63488 00:10:18.335 }, 00:10:18.335 { 00:10:18.335 "name": "BaseBdev2", 00:10:18.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.335 "is_configured": false, 00:10:18.335 "data_offset": 0, 00:10:18.335 "data_size": 0 00:10:18.335 }, 00:10:18.335 { 00:10:18.335 "name": "BaseBdev3", 00:10:18.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.335 "is_configured": false, 00:10:18.335 "data_offset": 0, 00:10:18.335 "data_size": 0 00:10:18.335 }, 00:10:18.335 { 00:10:18.335 "name": "BaseBdev4", 00:10:18.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.335 "is_configured": false, 00:10:18.335 "data_offset": 0, 00:10:18.335 "data_size": 0 00:10:18.335 } 00:10:18.335 ] 00:10:18.335 }' 00:10:18.335 13:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.335 13:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.595 [2024-11-20 13:24:00.188908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:18.595 BaseBdev2 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.595 [ 00:10:18.595 { 00:10:18.595 "name": "BaseBdev2", 00:10:18.595 "aliases": [ 00:10:18.595 "266ac0bb-cd7e-4411-823b-6a83099b7246" 00:10:18.595 ], 00:10:18.595 "product_name": "Malloc disk", 00:10:18.595 "block_size": 512, 00:10:18.595 "num_blocks": 65536, 00:10:18.595 "uuid": "266ac0bb-cd7e-4411-823b-6a83099b7246", 
00:10:18.595 "assigned_rate_limits": { 00:10:18.595 "rw_ios_per_sec": 0, 00:10:18.595 "rw_mbytes_per_sec": 0, 00:10:18.595 "r_mbytes_per_sec": 0, 00:10:18.595 "w_mbytes_per_sec": 0 00:10:18.595 }, 00:10:18.595 "claimed": true, 00:10:18.595 "claim_type": "exclusive_write", 00:10:18.595 "zoned": false, 00:10:18.595 "supported_io_types": { 00:10:18.595 "read": true, 00:10:18.595 "write": true, 00:10:18.595 "unmap": true, 00:10:18.595 "flush": true, 00:10:18.595 "reset": true, 00:10:18.595 "nvme_admin": false, 00:10:18.595 "nvme_io": false, 00:10:18.595 "nvme_io_md": false, 00:10:18.595 "write_zeroes": true, 00:10:18.595 "zcopy": true, 00:10:18.595 "get_zone_info": false, 00:10:18.595 "zone_management": false, 00:10:18.595 "zone_append": false, 00:10:18.595 "compare": false, 00:10:18.595 "compare_and_write": false, 00:10:18.595 "abort": true, 00:10:18.595 "seek_hole": false, 00:10:18.595 "seek_data": false, 00:10:18.595 "copy": true, 00:10:18.595 "nvme_iov_md": false 00:10:18.595 }, 00:10:18.595 "memory_domains": [ 00:10:18.595 { 00:10:18.595 "dma_device_id": "system", 00:10:18.595 "dma_device_type": 1 00:10:18.595 }, 00:10:18.595 { 00:10:18.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.595 "dma_device_type": 2 00:10:18.595 } 00:10:18.595 ], 00:10:18.595 "driver_specific": {} 00:10:18.595 } 00:10:18.595 ] 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.595 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.855 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.855 "name": "Existed_Raid", 00:10:18.855 "uuid": "6b000f8c-c8d0-454b-ad7c-e9a6953eb180", 00:10:18.855 "strip_size_kb": 64, 00:10:18.855 "state": "configuring", 00:10:18.855 "raid_level": "concat", 00:10:18.855 "superblock": true, 00:10:18.855 "num_base_bdevs": 4, 00:10:18.855 "num_base_bdevs_discovered": 2, 00:10:18.855 
"num_base_bdevs_operational": 4, 00:10:18.855 "base_bdevs_list": [ 00:10:18.855 { 00:10:18.855 "name": "BaseBdev1", 00:10:18.855 "uuid": "3e29e999-5c89-4bdb-b7ac-a23ce6796153", 00:10:18.855 "is_configured": true, 00:10:18.855 "data_offset": 2048, 00:10:18.855 "data_size": 63488 00:10:18.855 }, 00:10:18.855 { 00:10:18.855 "name": "BaseBdev2", 00:10:18.855 "uuid": "266ac0bb-cd7e-4411-823b-6a83099b7246", 00:10:18.855 "is_configured": true, 00:10:18.855 "data_offset": 2048, 00:10:18.855 "data_size": 63488 00:10:18.855 }, 00:10:18.855 { 00:10:18.855 "name": "BaseBdev3", 00:10:18.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.855 "is_configured": false, 00:10:18.855 "data_offset": 0, 00:10:18.855 "data_size": 0 00:10:18.855 }, 00:10:18.855 { 00:10:18.855 "name": "BaseBdev4", 00:10:18.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.855 "is_configured": false, 00:10:18.855 "data_offset": 0, 00:10:18.855 "data_size": 0 00:10:18.855 } 00:10:18.855 ] 00:10:18.855 }' 00:10:18.855 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.855 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.115 [2024-11-20 13:24:00.689516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.115 BaseBdev3 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.115 [ 00:10:19.115 { 00:10:19.115 "name": "BaseBdev3", 00:10:19.115 "aliases": [ 00:10:19.115 "4b3aa186-69c7-4204-a8a4-0be59c7f04bf" 00:10:19.115 ], 00:10:19.115 "product_name": "Malloc disk", 00:10:19.115 "block_size": 512, 00:10:19.115 "num_blocks": 65536, 00:10:19.115 "uuid": "4b3aa186-69c7-4204-a8a4-0be59c7f04bf", 00:10:19.115 "assigned_rate_limits": { 00:10:19.115 "rw_ios_per_sec": 0, 00:10:19.115 "rw_mbytes_per_sec": 0, 00:10:19.115 "r_mbytes_per_sec": 0, 00:10:19.115 "w_mbytes_per_sec": 0 00:10:19.115 }, 00:10:19.115 "claimed": true, 00:10:19.115 "claim_type": "exclusive_write", 00:10:19.115 "zoned": false, 00:10:19.115 "supported_io_types": { 
00:10:19.115 "read": true, 00:10:19.115 "write": true, 00:10:19.115 "unmap": true, 00:10:19.115 "flush": true, 00:10:19.115 "reset": true, 00:10:19.115 "nvme_admin": false, 00:10:19.115 "nvme_io": false, 00:10:19.115 "nvme_io_md": false, 00:10:19.115 "write_zeroes": true, 00:10:19.115 "zcopy": true, 00:10:19.115 "get_zone_info": false, 00:10:19.115 "zone_management": false, 00:10:19.115 "zone_append": false, 00:10:19.115 "compare": false, 00:10:19.115 "compare_and_write": false, 00:10:19.115 "abort": true, 00:10:19.115 "seek_hole": false, 00:10:19.115 "seek_data": false, 00:10:19.115 "copy": true, 00:10:19.115 "nvme_iov_md": false 00:10:19.115 }, 00:10:19.115 "memory_domains": [ 00:10:19.115 { 00:10:19.115 "dma_device_id": "system", 00:10:19.115 "dma_device_type": 1 00:10:19.115 }, 00:10:19.115 { 00:10:19.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.115 "dma_device_type": 2 00:10:19.115 } 00:10:19.115 ], 00:10:19.115 "driver_specific": {} 00:10:19.115 } 00:10:19.115 ] 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.115 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.115 "name": "Existed_Raid", 00:10:19.115 "uuid": "6b000f8c-c8d0-454b-ad7c-e9a6953eb180", 00:10:19.115 "strip_size_kb": 64, 00:10:19.115 "state": "configuring", 00:10:19.115 "raid_level": "concat", 00:10:19.115 "superblock": true, 00:10:19.115 "num_base_bdevs": 4, 00:10:19.115 "num_base_bdevs_discovered": 3, 00:10:19.115 "num_base_bdevs_operational": 4, 00:10:19.115 "base_bdevs_list": [ 00:10:19.115 { 00:10:19.115 "name": "BaseBdev1", 00:10:19.115 "uuid": "3e29e999-5c89-4bdb-b7ac-a23ce6796153", 00:10:19.115 "is_configured": true, 00:10:19.115 "data_offset": 2048, 00:10:19.115 "data_size": 63488 00:10:19.115 }, 00:10:19.115 { 00:10:19.115 "name": "BaseBdev2", 00:10:19.115 
"uuid": "266ac0bb-cd7e-4411-823b-6a83099b7246", 00:10:19.115 "is_configured": true, 00:10:19.115 "data_offset": 2048, 00:10:19.115 "data_size": 63488 00:10:19.115 }, 00:10:19.115 { 00:10:19.115 "name": "BaseBdev3", 00:10:19.115 "uuid": "4b3aa186-69c7-4204-a8a4-0be59c7f04bf", 00:10:19.115 "is_configured": true, 00:10:19.115 "data_offset": 2048, 00:10:19.115 "data_size": 63488 00:10:19.115 }, 00:10:19.115 { 00:10:19.115 "name": "BaseBdev4", 00:10:19.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.115 "is_configured": false, 00:10:19.115 "data_offset": 0, 00:10:19.116 "data_size": 0 00:10:19.116 } 00:10:19.116 ] 00:10:19.116 }' 00:10:19.116 13:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.116 13:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.685 [2024-11-20 13:24:01.155809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:19.685 [2024-11-20 13:24:01.156014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:19.685 [2024-11-20 13:24:01.156029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:19.685 [2024-11-20 13:24:01.156325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:19.685 [2024-11-20 13:24:01.156460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:19.685 [2024-11-20 13:24:01.156479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000001900 00:10:19.685 BaseBdev4 00:10:19.685 [2024-11-20 13:24:01.156594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.685 [ 00:10:19.685 { 00:10:19.685 "name": "BaseBdev4", 00:10:19.685 "aliases": [ 00:10:19.685 "7d844aef-7d7a-4a3c-a715-266d4552ac67" 00:10:19.685 ], 00:10:19.685 "product_name": "Malloc disk", 00:10:19.685 "block_size": 512, 
00:10:19.685 "num_blocks": 65536, 00:10:19.685 "uuid": "7d844aef-7d7a-4a3c-a715-266d4552ac67", 00:10:19.685 "assigned_rate_limits": { 00:10:19.685 "rw_ios_per_sec": 0, 00:10:19.685 "rw_mbytes_per_sec": 0, 00:10:19.685 "r_mbytes_per_sec": 0, 00:10:19.685 "w_mbytes_per_sec": 0 00:10:19.685 }, 00:10:19.685 "claimed": true, 00:10:19.685 "claim_type": "exclusive_write", 00:10:19.685 "zoned": false, 00:10:19.685 "supported_io_types": { 00:10:19.685 "read": true, 00:10:19.685 "write": true, 00:10:19.685 "unmap": true, 00:10:19.685 "flush": true, 00:10:19.685 "reset": true, 00:10:19.685 "nvme_admin": false, 00:10:19.685 "nvme_io": false, 00:10:19.685 "nvme_io_md": false, 00:10:19.685 "write_zeroes": true, 00:10:19.685 "zcopy": true, 00:10:19.685 "get_zone_info": false, 00:10:19.685 "zone_management": false, 00:10:19.685 "zone_append": false, 00:10:19.685 "compare": false, 00:10:19.685 "compare_and_write": false, 00:10:19.685 "abort": true, 00:10:19.685 "seek_hole": false, 00:10:19.685 "seek_data": false, 00:10:19.685 "copy": true, 00:10:19.685 "nvme_iov_md": false 00:10:19.685 }, 00:10:19.685 "memory_domains": [ 00:10:19.685 { 00:10:19.685 "dma_device_id": "system", 00:10:19.685 "dma_device_type": 1 00:10:19.685 }, 00:10:19.685 { 00:10:19.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.685 "dma_device_type": 2 00:10:19.685 } 00:10:19.685 ], 00:10:19.685 "driver_specific": {} 00:10:19.685 } 00:10:19.685 ] 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.685 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.685 "name": "Existed_Raid", 00:10:19.685 "uuid": "6b000f8c-c8d0-454b-ad7c-e9a6953eb180", 00:10:19.685 "strip_size_kb": 64, 00:10:19.685 "state": "online", 00:10:19.685 "raid_level": "concat", 00:10:19.685 "superblock": true, 00:10:19.685 "num_base_bdevs": 
4, 00:10:19.685 "num_base_bdevs_discovered": 4, 00:10:19.685 "num_base_bdevs_operational": 4, 00:10:19.685 "base_bdevs_list": [ 00:10:19.685 { 00:10:19.685 "name": "BaseBdev1", 00:10:19.685 "uuid": "3e29e999-5c89-4bdb-b7ac-a23ce6796153", 00:10:19.685 "is_configured": true, 00:10:19.685 "data_offset": 2048, 00:10:19.685 "data_size": 63488 00:10:19.685 }, 00:10:19.685 { 00:10:19.685 "name": "BaseBdev2", 00:10:19.685 "uuid": "266ac0bb-cd7e-4411-823b-6a83099b7246", 00:10:19.685 "is_configured": true, 00:10:19.686 "data_offset": 2048, 00:10:19.686 "data_size": 63488 00:10:19.686 }, 00:10:19.686 { 00:10:19.686 "name": "BaseBdev3", 00:10:19.686 "uuid": "4b3aa186-69c7-4204-a8a4-0be59c7f04bf", 00:10:19.686 "is_configured": true, 00:10:19.686 "data_offset": 2048, 00:10:19.686 "data_size": 63488 00:10:19.686 }, 00:10:19.686 { 00:10:19.686 "name": "BaseBdev4", 00:10:19.686 "uuid": "7d844aef-7d7a-4a3c-a715-266d4552ac67", 00:10:19.686 "is_configured": true, 00:10:19.686 "data_offset": 2048, 00:10:19.686 "data_size": 63488 00:10:19.686 } 00:10:19.686 ] 00:10:19.686 }' 00:10:19.686 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.686 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.946 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:19.946 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:19.946 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.946 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.946 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.946 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.946 
13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:19.946 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.946 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.946 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.946 [2024-11-20 13:24:01.571498] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.946 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.946 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.946 "name": "Existed_Raid", 00:10:19.946 "aliases": [ 00:10:19.946 "6b000f8c-c8d0-454b-ad7c-e9a6953eb180" 00:10:19.946 ], 00:10:19.946 "product_name": "Raid Volume", 00:10:19.946 "block_size": 512, 00:10:19.946 "num_blocks": 253952, 00:10:19.946 "uuid": "6b000f8c-c8d0-454b-ad7c-e9a6953eb180", 00:10:19.946 "assigned_rate_limits": { 00:10:19.946 "rw_ios_per_sec": 0, 00:10:19.946 "rw_mbytes_per_sec": 0, 00:10:19.946 "r_mbytes_per_sec": 0, 00:10:19.946 "w_mbytes_per_sec": 0 00:10:19.946 }, 00:10:19.946 "claimed": false, 00:10:19.946 "zoned": false, 00:10:19.946 "supported_io_types": { 00:10:19.946 "read": true, 00:10:19.946 "write": true, 00:10:19.946 "unmap": true, 00:10:19.946 "flush": true, 00:10:19.946 "reset": true, 00:10:19.946 "nvme_admin": false, 00:10:19.946 "nvme_io": false, 00:10:19.946 "nvme_io_md": false, 00:10:19.946 "write_zeroes": true, 00:10:19.946 "zcopy": false, 00:10:19.946 "get_zone_info": false, 00:10:19.946 "zone_management": false, 00:10:19.946 "zone_append": false, 00:10:19.946 "compare": false, 00:10:19.946 "compare_and_write": false, 00:10:19.946 "abort": false, 00:10:19.946 "seek_hole": false, 00:10:19.946 "seek_data": false, 00:10:19.946 "copy": false, 00:10:19.946 
"nvme_iov_md": false 00:10:19.946 }, 00:10:19.946 "memory_domains": [ 00:10:19.946 { 00:10:19.946 "dma_device_id": "system", 00:10:19.946 "dma_device_type": 1 00:10:19.946 }, 00:10:19.946 { 00:10:19.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.947 "dma_device_type": 2 00:10:19.947 }, 00:10:19.947 { 00:10:19.947 "dma_device_id": "system", 00:10:19.947 "dma_device_type": 1 00:10:19.947 }, 00:10:19.947 { 00:10:19.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.947 "dma_device_type": 2 00:10:19.947 }, 00:10:19.947 { 00:10:19.947 "dma_device_id": "system", 00:10:19.947 "dma_device_type": 1 00:10:19.947 }, 00:10:19.947 { 00:10:19.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.947 "dma_device_type": 2 00:10:19.947 }, 00:10:19.947 { 00:10:19.947 "dma_device_id": "system", 00:10:19.947 "dma_device_type": 1 00:10:19.947 }, 00:10:19.947 { 00:10:19.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.947 "dma_device_type": 2 00:10:19.947 } 00:10:19.947 ], 00:10:19.947 "driver_specific": { 00:10:19.947 "raid": { 00:10:19.947 "uuid": "6b000f8c-c8d0-454b-ad7c-e9a6953eb180", 00:10:19.947 "strip_size_kb": 64, 00:10:19.947 "state": "online", 00:10:19.947 "raid_level": "concat", 00:10:19.947 "superblock": true, 00:10:19.947 "num_base_bdevs": 4, 00:10:19.947 "num_base_bdevs_discovered": 4, 00:10:19.947 "num_base_bdevs_operational": 4, 00:10:19.947 "base_bdevs_list": [ 00:10:19.947 { 00:10:19.947 "name": "BaseBdev1", 00:10:19.947 "uuid": "3e29e999-5c89-4bdb-b7ac-a23ce6796153", 00:10:19.947 "is_configured": true, 00:10:19.947 "data_offset": 2048, 00:10:19.947 "data_size": 63488 00:10:19.947 }, 00:10:19.947 { 00:10:19.947 "name": "BaseBdev2", 00:10:19.947 "uuid": "266ac0bb-cd7e-4411-823b-6a83099b7246", 00:10:19.947 "is_configured": true, 00:10:19.947 "data_offset": 2048, 00:10:19.947 "data_size": 63488 00:10:19.947 }, 00:10:19.947 { 00:10:19.947 "name": "BaseBdev3", 00:10:19.947 "uuid": "4b3aa186-69c7-4204-a8a4-0be59c7f04bf", 00:10:19.947 "is_configured": true, 
00:10:19.947 "data_offset": 2048, 00:10:19.947 "data_size": 63488 00:10:19.947 }, 00:10:19.947 { 00:10:19.947 "name": "BaseBdev4", 00:10:19.947 "uuid": "7d844aef-7d7a-4a3c-a715-266d4552ac67", 00:10:19.947 "is_configured": true, 00:10:19.947 "data_offset": 2048, 00:10:19.947 "data_size": 63488 00:10:19.947 } 00:10:19.947 ] 00:10:19.947 } 00:10:19.947 } 00:10:19.947 }' 00:10:19.947 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.207 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:20.207 BaseBdev2 00:10:20.207 BaseBdev3 00:10:20.207 BaseBdev4' 00:10:20.207 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.207 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.207 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.207 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:20.207 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.207 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.207 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.207 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.207 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.207 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.207 13:24:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.207 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.208 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.468 [2024-11-20 13:24:01.874692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:20.468 [2024-11-20 13:24:01.874724] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.468 [2024-11-20 13:24:01.874781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.468 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.469 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.469 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.469 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:20.469 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.469 "name": "Existed_Raid", 00:10:20.469 "uuid": "6b000f8c-c8d0-454b-ad7c-e9a6953eb180", 00:10:20.469 "strip_size_kb": 64, 00:10:20.469 "state": "offline", 00:10:20.469 "raid_level": "concat", 00:10:20.469 "superblock": true, 00:10:20.469 "num_base_bdevs": 4, 00:10:20.469 "num_base_bdevs_discovered": 3, 00:10:20.469 "num_base_bdevs_operational": 3, 00:10:20.469 "base_bdevs_list": [ 00:10:20.469 { 00:10:20.469 "name": null, 00:10:20.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.469 "is_configured": false, 00:10:20.469 "data_offset": 0, 00:10:20.469 "data_size": 63488 00:10:20.469 }, 00:10:20.469 { 00:10:20.469 "name": "BaseBdev2", 00:10:20.469 "uuid": "266ac0bb-cd7e-4411-823b-6a83099b7246", 00:10:20.469 "is_configured": true, 00:10:20.469 "data_offset": 2048, 00:10:20.469 "data_size": 63488 00:10:20.469 }, 00:10:20.469 { 00:10:20.469 "name": "BaseBdev3", 00:10:20.469 "uuid": "4b3aa186-69c7-4204-a8a4-0be59c7f04bf", 00:10:20.469 "is_configured": true, 00:10:20.469 "data_offset": 2048, 00:10:20.469 "data_size": 63488 00:10:20.469 }, 00:10:20.469 { 00:10:20.469 "name": "BaseBdev4", 00:10:20.469 "uuid": "7d844aef-7d7a-4a3c-a715-266d4552ac67", 00:10:20.469 "is_configured": true, 00:10:20.469 "data_offset": 2048, 00:10:20.469 "data_size": 63488 00:10:20.469 } 00:10:20.469 ] 00:10:20.469 }' 00:10:20.469 13:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.469 13:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.729 
13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.729 [2024-11-20 13:24:02.321269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.729 [2024-11-20 13:24:02.376600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.729 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:20.990 13:24:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.990 [2024-11-20 13:24:02.435797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:20.990 [2024-11-20 13:24:02.435896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.990 BaseBdev2 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.990 [ 00:10:20.990 { 00:10:20.990 "name": "BaseBdev2", 00:10:20.990 "aliases": [ 00:10:20.990 
"fc38678b-1774-49cf-bbf5-f212646cb673" 00:10:20.990 ], 00:10:20.990 "product_name": "Malloc disk", 00:10:20.990 "block_size": 512, 00:10:20.990 "num_blocks": 65536, 00:10:20.990 "uuid": "fc38678b-1774-49cf-bbf5-f212646cb673", 00:10:20.990 "assigned_rate_limits": { 00:10:20.990 "rw_ios_per_sec": 0, 00:10:20.990 "rw_mbytes_per_sec": 0, 00:10:20.990 "r_mbytes_per_sec": 0, 00:10:20.990 "w_mbytes_per_sec": 0 00:10:20.990 }, 00:10:20.990 "claimed": false, 00:10:20.990 "zoned": false, 00:10:20.990 "supported_io_types": { 00:10:20.990 "read": true, 00:10:20.990 "write": true, 00:10:20.990 "unmap": true, 00:10:20.990 "flush": true, 00:10:20.990 "reset": true, 00:10:20.990 "nvme_admin": false, 00:10:20.990 "nvme_io": false, 00:10:20.990 "nvme_io_md": false, 00:10:20.990 "write_zeroes": true, 00:10:20.990 "zcopy": true, 00:10:20.990 "get_zone_info": false, 00:10:20.990 "zone_management": false, 00:10:20.990 "zone_append": false, 00:10:20.990 "compare": false, 00:10:20.990 "compare_and_write": false, 00:10:20.990 "abort": true, 00:10:20.990 "seek_hole": false, 00:10:20.990 "seek_data": false, 00:10:20.990 "copy": true, 00:10:20.990 "nvme_iov_md": false 00:10:20.990 }, 00:10:20.990 "memory_domains": [ 00:10:20.990 { 00:10:20.990 "dma_device_id": "system", 00:10:20.990 "dma_device_type": 1 00:10:20.990 }, 00:10:20.990 { 00:10:20.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.990 "dma_device_type": 2 00:10:20.990 } 00:10:20.990 ], 00:10:20.990 "driver_specific": {} 00:10:20.990 } 00:10:20.990 ] 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:20.990 13:24:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.990 BaseBdev3 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.990 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.991 [ 00:10:20.991 { 
00:10:20.991 "name": "BaseBdev3", 00:10:20.991 "aliases": [ 00:10:20.991 "aecb0457-088b-4bdb-92a1-a05bca58b983" 00:10:20.991 ], 00:10:20.991 "product_name": "Malloc disk", 00:10:20.991 "block_size": 512, 00:10:20.991 "num_blocks": 65536, 00:10:20.991 "uuid": "aecb0457-088b-4bdb-92a1-a05bca58b983", 00:10:20.991 "assigned_rate_limits": { 00:10:20.991 "rw_ios_per_sec": 0, 00:10:20.991 "rw_mbytes_per_sec": 0, 00:10:20.991 "r_mbytes_per_sec": 0, 00:10:20.991 "w_mbytes_per_sec": 0 00:10:20.991 }, 00:10:20.991 "claimed": false, 00:10:20.991 "zoned": false, 00:10:20.991 "supported_io_types": { 00:10:20.991 "read": true, 00:10:20.991 "write": true, 00:10:20.991 "unmap": true, 00:10:20.991 "flush": true, 00:10:20.991 "reset": true, 00:10:20.991 "nvme_admin": false, 00:10:20.991 "nvme_io": false, 00:10:20.991 "nvme_io_md": false, 00:10:20.991 "write_zeroes": true, 00:10:20.991 "zcopy": true, 00:10:20.991 "get_zone_info": false, 00:10:20.991 "zone_management": false, 00:10:20.991 "zone_append": false, 00:10:20.991 "compare": false, 00:10:20.991 "compare_and_write": false, 00:10:20.991 "abort": true, 00:10:20.991 "seek_hole": false, 00:10:20.991 "seek_data": false, 00:10:20.991 "copy": true, 00:10:20.991 "nvme_iov_md": false 00:10:20.991 }, 00:10:20.991 "memory_domains": [ 00:10:20.991 { 00:10:20.991 "dma_device_id": "system", 00:10:20.991 "dma_device_type": 1 00:10:20.991 }, 00:10:20.991 { 00:10:20.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.991 "dma_device_type": 2 00:10:20.991 } 00:10:20.991 ], 00:10:20.991 "driver_specific": {} 00:10:20.991 } 00:10:20.991 ] 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.991 BaseBdev4 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:20.991 [ 00:10:20.991 { 00:10:20.991 "name": "BaseBdev4", 00:10:20.991 "aliases": [ 00:10:20.991 "8dec8791-3279-4c17-b1d0-f749865fafc2" 00:10:20.991 ], 00:10:20.991 "product_name": "Malloc disk", 00:10:20.991 "block_size": 512, 00:10:20.991 "num_blocks": 65536, 00:10:20.991 "uuid": "8dec8791-3279-4c17-b1d0-f749865fafc2", 00:10:20.991 "assigned_rate_limits": { 00:10:20.991 "rw_ios_per_sec": 0, 00:10:20.991 "rw_mbytes_per_sec": 0, 00:10:20.991 "r_mbytes_per_sec": 0, 00:10:20.991 "w_mbytes_per_sec": 0 00:10:20.991 }, 00:10:20.991 "claimed": false, 00:10:20.991 "zoned": false, 00:10:20.991 "supported_io_types": { 00:10:20.991 "read": true, 00:10:20.991 "write": true, 00:10:20.991 "unmap": true, 00:10:20.991 "flush": true, 00:10:20.991 "reset": true, 00:10:20.991 "nvme_admin": false, 00:10:20.991 "nvme_io": false, 00:10:20.991 "nvme_io_md": false, 00:10:20.991 "write_zeroes": true, 00:10:20.991 "zcopy": true, 00:10:20.991 "get_zone_info": false, 00:10:20.991 "zone_management": false, 00:10:20.991 "zone_append": false, 00:10:20.991 "compare": false, 00:10:20.991 "compare_and_write": false, 00:10:20.991 "abort": true, 00:10:20.991 "seek_hole": false, 00:10:20.991 "seek_data": false, 00:10:20.991 "copy": true, 00:10:20.991 "nvme_iov_md": false 00:10:20.991 }, 00:10:20.991 "memory_domains": [ 00:10:20.991 { 00:10:20.991 "dma_device_id": "system", 00:10:20.991 "dma_device_type": 1 00:10:20.991 }, 00:10:20.991 { 00:10:20.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.991 "dma_device_type": 2 00:10:20.991 } 00:10:20.991 ], 00:10:20.991 "driver_specific": {} 00:10:20.991 } 00:10:20.991 ] 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:20.991 13:24:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.991 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.252 [2024-11-20 13:24:02.660611] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:21.252 [2024-11-20 13:24:02.660700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:21.252 [2024-11-20 13:24:02.660764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.252 [2024-11-20 13:24:02.662605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:21.252 [2024-11-20 13:24:02.662694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.252 "name": "Existed_Raid", 00:10:21.252 "uuid": "9c5b2841-674e-4ef2-9207-168aafa23fbf", 00:10:21.252 "strip_size_kb": 64, 00:10:21.252 "state": "configuring", 00:10:21.252 "raid_level": "concat", 00:10:21.252 "superblock": true, 00:10:21.252 "num_base_bdevs": 4, 00:10:21.252 "num_base_bdevs_discovered": 3, 00:10:21.252 "num_base_bdevs_operational": 4, 00:10:21.252 "base_bdevs_list": [ 00:10:21.252 { 00:10:21.252 "name": "BaseBdev1", 00:10:21.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.252 "is_configured": false, 00:10:21.252 "data_offset": 0, 00:10:21.252 "data_size": 0 00:10:21.252 }, 00:10:21.252 { 00:10:21.252 "name": "BaseBdev2", 00:10:21.252 "uuid": "fc38678b-1774-49cf-bbf5-f212646cb673", 00:10:21.252 "is_configured": true, 00:10:21.252 "data_offset": 2048, 00:10:21.252 "data_size": 63488 
00:10:21.252 }, 00:10:21.252 { 00:10:21.252 "name": "BaseBdev3", 00:10:21.252 "uuid": "aecb0457-088b-4bdb-92a1-a05bca58b983", 00:10:21.252 "is_configured": true, 00:10:21.252 "data_offset": 2048, 00:10:21.252 "data_size": 63488 00:10:21.252 }, 00:10:21.252 { 00:10:21.252 "name": "BaseBdev4", 00:10:21.252 "uuid": "8dec8791-3279-4c17-b1d0-f749865fafc2", 00:10:21.252 "is_configured": true, 00:10:21.252 "data_offset": 2048, 00:10:21.252 "data_size": 63488 00:10:21.252 } 00:10:21.252 ] 00:10:21.252 }' 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.252 13:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.513 [2024-11-20 13:24:03.143788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.513 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.774 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.774 "name": "Existed_Raid", 00:10:21.774 "uuid": "9c5b2841-674e-4ef2-9207-168aafa23fbf", 00:10:21.774 "strip_size_kb": 64, 00:10:21.774 "state": "configuring", 00:10:21.774 "raid_level": "concat", 00:10:21.774 "superblock": true, 00:10:21.774 "num_base_bdevs": 4, 00:10:21.774 "num_base_bdevs_discovered": 2, 00:10:21.774 "num_base_bdevs_operational": 4, 00:10:21.774 "base_bdevs_list": [ 00:10:21.774 { 00:10:21.774 "name": "BaseBdev1", 00:10:21.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.774 "is_configured": false, 00:10:21.774 "data_offset": 0, 00:10:21.774 "data_size": 0 00:10:21.774 }, 00:10:21.774 { 00:10:21.774 "name": null, 00:10:21.774 "uuid": "fc38678b-1774-49cf-bbf5-f212646cb673", 00:10:21.774 "is_configured": false, 00:10:21.774 "data_offset": 0, 00:10:21.774 "data_size": 63488 
00:10:21.774 }, 00:10:21.774 { 00:10:21.774 "name": "BaseBdev3", 00:10:21.774 "uuid": "aecb0457-088b-4bdb-92a1-a05bca58b983", 00:10:21.774 "is_configured": true, 00:10:21.774 "data_offset": 2048, 00:10:21.774 "data_size": 63488 00:10:21.774 }, 00:10:21.774 { 00:10:21.774 "name": "BaseBdev4", 00:10:21.774 "uuid": "8dec8791-3279-4c17-b1d0-f749865fafc2", 00:10:21.774 "is_configured": true, 00:10:21.774 "data_offset": 2048, 00:10:21.774 "data_size": 63488 00:10:21.774 } 00:10:21.774 ] 00:10:21.774 }' 00:10:21.774 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.774 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.034 [2024-11-20 13:24:03.634022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.034 BaseBdev1 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.034 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.034 [ 00:10:22.034 { 00:10:22.034 "name": "BaseBdev1", 00:10:22.034 "aliases": [ 00:10:22.034 "17eab2bd-71a0-4af7-9c80-c6498885d621" 00:10:22.034 ], 00:10:22.034 "product_name": "Malloc disk", 00:10:22.034 "block_size": 512, 00:10:22.034 "num_blocks": 65536, 00:10:22.034 "uuid": "17eab2bd-71a0-4af7-9c80-c6498885d621", 00:10:22.034 "assigned_rate_limits": { 00:10:22.034 "rw_ios_per_sec": 0, 00:10:22.034 "rw_mbytes_per_sec": 0, 
00:10:22.034 "r_mbytes_per_sec": 0, 00:10:22.034 "w_mbytes_per_sec": 0 00:10:22.034 }, 00:10:22.034 "claimed": true, 00:10:22.034 "claim_type": "exclusive_write", 00:10:22.034 "zoned": false, 00:10:22.035 "supported_io_types": { 00:10:22.035 "read": true, 00:10:22.035 "write": true, 00:10:22.035 "unmap": true, 00:10:22.035 "flush": true, 00:10:22.035 "reset": true, 00:10:22.035 "nvme_admin": false, 00:10:22.035 "nvme_io": false, 00:10:22.035 "nvme_io_md": false, 00:10:22.035 "write_zeroes": true, 00:10:22.035 "zcopy": true, 00:10:22.035 "get_zone_info": false, 00:10:22.035 "zone_management": false, 00:10:22.035 "zone_append": false, 00:10:22.035 "compare": false, 00:10:22.035 "compare_and_write": false, 00:10:22.035 "abort": true, 00:10:22.035 "seek_hole": false, 00:10:22.035 "seek_data": false, 00:10:22.035 "copy": true, 00:10:22.035 "nvme_iov_md": false 00:10:22.035 }, 00:10:22.035 "memory_domains": [ 00:10:22.035 { 00:10:22.035 "dma_device_id": "system", 00:10:22.035 "dma_device_type": 1 00:10:22.035 }, 00:10:22.035 { 00:10:22.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.035 "dma_device_type": 2 00:10:22.035 } 00:10:22.035 ], 00:10:22.035 "driver_specific": {} 00:10:22.035 } 00:10:22.035 ] 00:10:22.035 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.035 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:22.035 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:22.035 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.035 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.035 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.035 13:24:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.035 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.035 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.035 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.035 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.035 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.035 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.035 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.035 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.035 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.035 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.294 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.294 "name": "Existed_Raid", 00:10:22.294 "uuid": "9c5b2841-674e-4ef2-9207-168aafa23fbf", 00:10:22.294 "strip_size_kb": 64, 00:10:22.294 "state": "configuring", 00:10:22.294 "raid_level": "concat", 00:10:22.294 "superblock": true, 00:10:22.294 "num_base_bdevs": 4, 00:10:22.294 "num_base_bdevs_discovered": 3, 00:10:22.294 "num_base_bdevs_operational": 4, 00:10:22.294 "base_bdevs_list": [ 00:10:22.294 { 00:10:22.294 "name": "BaseBdev1", 00:10:22.295 "uuid": "17eab2bd-71a0-4af7-9c80-c6498885d621", 00:10:22.295 "is_configured": true, 00:10:22.295 "data_offset": 2048, 00:10:22.295 "data_size": 63488 00:10:22.295 }, 00:10:22.295 { 
00:10:22.295 "name": null, 00:10:22.295 "uuid": "fc38678b-1774-49cf-bbf5-f212646cb673", 00:10:22.295 "is_configured": false, 00:10:22.295 "data_offset": 0, 00:10:22.295 "data_size": 63488 00:10:22.295 }, 00:10:22.295 { 00:10:22.295 "name": "BaseBdev3", 00:10:22.295 "uuid": "aecb0457-088b-4bdb-92a1-a05bca58b983", 00:10:22.295 "is_configured": true, 00:10:22.295 "data_offset": 2048, 00:10:22.295 "data_size": 63488 00:10:22.295 }, 00:10:22.295 { 00:10:22.295 "name": "BaseBdev4", 00:10:22.295 "uuid": "8dec8791-3279-4c17-b1d0-f749865fafc2", 00:10:22.295 "is_configured": true, 00:10:22.295 "data_offset": 2048, 00:10:22.295 "data_size": 63488 00:10:22.295 } 00:10:22.295 ] 00:10:22.295 }' 00:10:22.295 13:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.295 13:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.555 [2024-11-20 13:24:04.157210] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.555 13:24:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.555 "name": "Existed_Raid", 00:10:22.555 "uuid": "9c5b2841-674e-4ef2-9207-168aafa23fbf", 00:10:22.555 "strip_size_kb": 64, 00:10:22.555 "state": "configuring", 00:10:22.555 "raid_level": "concat", 00:10:22.555 "superblock": true, 00:10:22.555 "num_base_bdevs": 4, 00:10:22.555 "num_base_bdevs_discovered": 2, 00:10:22.555 "num_base_bdevs_operational": 4, 00:10:22.555 "base_bdevs_list": [ 00:10:22.555 { 00:10:22.555 "name": "BaseBdev1", 00:10:22.555 "uuid": "17eab2bd-71a0-4af7-9c80-c6498885d621", 00:10:22.555 "is_configured": true, 00:10:22.555 "data_offset": 2048, 00:10:22.555 "data_size": 63488 00:10:22.555 }, 00:10:22.555 { 00:10:22.555 "name": null, 00:10:22.555 "uuid": "fc38678b-1774-49cf-bbf5-f212646cb673", 00:10:22.555 "is_configured": false, 00:10:22.555 "data_offset": 0, 00:10:22.555 "data_size": 63488 00:10:22.555 }, 00:10:22.555 { 00:10:22.555 "name": null, 00:10:22.555 "uuid": "aecb0457-088b-4bdb-92a1-a05bca58b983", 00:10:22.555 "is_configured": false, 00:10:22.555 "data_offset": 0, 00:10:22.555 "data_size": 63488 00:10:22.555 }, 00:10:22.555 { 00:10:22.555 "name": "BaseBdev4", 00:10:22.555 "uuid": "8dec8791-3279-4c17-b1d0-f749865fafc2", 00:10:22.555 "is_configured": true, 00:10:22.555 "data_offset": 2048, 00:10:22.555 "data_size": 63488 00:10:22.555 } 00:10:22.555 ] 00:10:22.555 }' 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.555 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.125 
13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.125 [2024-11-20 13:24:04.632389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.125 "name": "Existed_Raid", 00:10:23.125 "uuid": "9c5b2841-674e-4ef2-9207-168aafa23fbf", 00:10:23.125 "strip_size_kb": 64, 00:10:23.125 "state": "configuring", 00:10:23.125 "raid_level": "concat", 00:10:23.125 "superblock": true, 00:10:23.125 "num_base_bdevs": 4, 00:10:23.125 "num_base_bdevs_discovered": 3, 00:10:23.125 "num_base_bdevs_operational": 4, 00:10:23.125 "base_bdevs_list": [ 00:10:23.125 { 00:10:23.125 "name": "BaseBdev1", 00:10:23.125 "uuid": "17eab2bd-71a0-4af7-9c80-c6498885d621", 00:10:23.125 "is_configured": true, 00:10:23.125 "data_offset": 2048, 00:10:23.125 "data_size": 63488 00:10:23.125 }, 00:10:23.125 { 00:10:23.125 "name": null, 00:10:23.125 "uuid": "fc38678b-1774-49cf-bbf5-f212646cb673", 00:10:23.125 "is_configured": false, 00:10:23.125 "data_offset": 0, 00:10:23.125 "data_size": 63488 00:10:23.125 }, 00:10:23.125 { 00:10:23.125 "name": "BaseBdev3", 00:10:23.125 "uuid": "aecb0457-088b-4bdb-92a1-a05bca58b983", 00:10:23.125 "is_configured": true, 00:10:23.125 "data_offset": 2048, 00:10:23.125 "data_size": 63488 00:10:23.125 }, 00:10:23.125 { 00:10:23.125 "name": "BaseBdev4", 00:10:23.125 "uuid": 
"8dec8791-3279-4c17-b1d0-f749865fafc2", 00:10:23.125 "is_configured": true, 00:10:23.125 "data_offset": 2048, 00:10:23.125 "data_size": 63488 00:10:23.125 } 00:10:23.125 ] 00:10:23.125 }' 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.125 13:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.695 [2024-11-20 13:24:05.115598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.695 "name": "Existed_Raid", 00:10:23.695 "uuid": "9c5b2841-674e-4ef2-9207-168aafa23fbf", 00:10:23.695 "strip_size_kb": 64, 00:10:23.695 "state": "configuring", 00:10:23.695 "raid_level": "concat", 00:10:23.695 "superblock": true, 00:10:23.695 "num_base_bdevs": 4, 00:10:23.695 "num_base_bdevs_discovered": 2, 00:10:23.695 "num_base_bdevs_operational": 4, 00:10:23.695 "base_bdevs_list": [ 00:10:23.695 { 00:10:23.695 "name": null, 00:10:23.695 
"uuid": "17eab2bd-71a0-4af7-9c80-c6498885d621", 00:10:23.695 "is_configured": false, 00:10:23.695 "data_offset": 0, 00:10:23.695 "data_size": 63488 00:10:23.695 }, 00:10:23.695 { 00:10:23.695 "name": null, 00:10:23.695 "uuid": "fc38678b-1774-49cf-bbf5-f212646cb673", 00:10:23.695 "is_configured": false, 00:10:23.695 "data_offset": 0, 00:10:23.695 "data_size": 63488 00:10:23.695 }, 00:10:23.695 { 00:10:23.695 "name": "BaseBdev3", 00:10:23.695 "uuid": "aecb0457-088b-4bdb-92a1-a05bca58b983", 00:10:23.695 "is_configured": true, 00:10:23.695 "data_offset": 2048, 00:10:23.695 "data_size": 63488 00:10:23.695 }, 00:10:23.695 { 00:10:23.695 "name": "BaseBdev4", 00:10:23.695 "uuid": "8dec8791-3279-4c17-b1d0-f749865fafc2", 00:10:23.695 "is_configured": true, 00:10:23.695 "data_offset": 2048, 00:10:23.695 "data_size": 63488 00:10:23.695 } 00:10:23.695 ] 00:10:23.695 }' 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.695 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.955 [2024-11-20 13:24:05.581162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.955 13:24:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.955 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.215 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.215 "name": "Existed_Raid", 00:10:24.215 "uuid": "9c5b2841-674e-4ef2-9207-168aafa23fbf", 00:10:24.215 "strip_size_kb": 64, 00:10:24.215 "state": "configuring", 00:10:24.215 "raid_level": "concat", 00:10:24.215 "superblock": true, 00:10:24.215 "num_base_bdevs": 4, 00:10:24.215 "num_base_bdevs_discovered": 3, 00:10:24.215 "num_base_bdevs_operational": 4, 00:10:24.215 "base_bdevs_list": [ 00:10:24.215 { 00:10:24.215 "name": null, 00:10:24.215 "uuid": "17eab2bd-71a0-4af7-9c80-c6498885d621", 00:10:24.215 "is_configured": false, 00:10:24.215 "data_offset": 0, 00:10:24.215 "data_size": 63488 00:10:24.215 }, 00:10:24.215 { 00:10:24.215 "name": "BaseBdev2", 00:10:24.215 "uuid": "fc38678b-1774-49cf-bbf5-f212646cb673", 00:10:24.215 "is_configured": true, 00:10:24.215 "data_offset": 2048, 00:10:24.215 "data_size": 63488 00:10:24.215 }, 00:10:24.215 { 00:10:24.215 "name": "BaseBdev3", 00:10:24.215 "uuid": "aecb0457-088b-4bdb-92a1-a05bca58b983", 00:10:24.215 "is_configured": true, 00:10:24.215 "data_offset": 2048, 00:10:24.215 "data_size": 63488 00:10:24.215 }, 00:10:24.215 { 00:10:24.215 "name": "BaseBdev4", 00:10:24.215 "uuid": "8dec8791-3279-4c17-b1d0-f749865fafc2", 00:10:24.215 "is_configured": true, 00:10:24.215 "data_offset": 2048, 00:10:24.215 "data_size": 63488 00:10:24.215 } 00:10:24.215 ] 00:10:24.215 }' 00:10:24.215 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.215 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.475 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.475 13:24:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.475 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.475 13:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:24.475 13:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 17eab2bd-71a0-4af7-9c80-c6498885d621 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.475 [2024-11-20 13:24:06.079431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:24.475 [2024-11-20 13:24:06.079706] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:24.475 [2024-11-20 13:24:06.079776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:24.475 [2024-11-20 13:24:06.080093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:10:24.475 [2024-11-20 13:24:06.080236] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:24.475 [2024-11-20 13:24:06.080277] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:24.475 NewBaseBdev 00:10:24.475 [2024-11-20 13:24:06.080446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:24.475 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.475 13:24:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.475 [ 00:10:24.475 { 00:10:24.475 "name": "NewBaseBdev", 00:10:24.475 "aliases": [ 00:10:24.475 "17eab2bd-71a0-4af7-9c80-c6498885d621" 00:10:24.475 ], 00:10:24.475 "product_name": "Malloc disk", 00:10:24.475 "block_size": 512, 00:10:24.475 "num_blocks": 65536, 00:10:24.475 "uuid": "17eab2bd-71a0-4af7-9c80-c6498885d621", 00:10:24.475 "assigned_rate_limits": { 00:10:24.475 "rw_ios_per_sec": 0, 00:10:24.475 "rw_mbytes_per_sec": 0, 00:10:24.475 "r_mbytes_per_sec": 0, 00:10:24.475 "w_mbytes_per_sec": 0 00:10:24.475 }, 00:10:24.475 "claimed": true, 00:10:24.475 "claim_type": "exclusive_write", 00:10:24.475 "zoned": false, 00:10:24.475 "supported_io_types": { 00:10:24.475 "read": true, 00:10:24.475 "write": true, 00:10:24.475 "unmap": true, 00:10:24.475 "flush": true, 00:10:24.475 "reset": true, 00:10:24.475 "nvme_admin": false, 00:10:24.475 "nvme_io": false, 00:10:24.475 "nvme_io_md": false, 00:10:24.475 "write_zeroes": true, 00:10:24.475 "zcopy": true, 00:10:24.475 "get_zone_info": false, 00:10:24.475 "zone_management": false, 00:10:24.475 "zone_append": false, 00:10:24.475 "compare": false, 00:10:24.475 "compare_and_write": false, 00:10:24.475 "abort": true, 00:10:24.475 "seek_hole": false, 00:10:24.475 "seek_data": false, 00:10:24.475 "copy": true, 00:10:24.475 "nvme_iov_md": false 00:10:24.475 }, 00:10:24.476 "memory_domains": [ 00:10:24.476 { 00:10:24.476 "dma_device_id": "system", 00:10:24.476 "dma_device_type": 1 00:10:24.476 }, 00:10:24.476 { 00:10:24.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.476 "dma_device_type": 2 00:10:24.476 } 00:10:24.476 ], 00:10:24.476 "driver_specific": {} 00:10:24.476 } 00:10:24.476 ] 00:10:24.476 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.476 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:24.476 13:24:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:24.476 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.476 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.476 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.476 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.476 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.476 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.476 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.476 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.476 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.476 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.476 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.476 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.476 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.476 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.736 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.736 "name": "Existed_Raid", 00:10:24.736 "uuid": "9c5b2841-674e-4ef2-9207-168aafa23fbf", 00:10:24.736 "strip_size_kb": 64, 00:10:24.736 
"state": "online", 00:10:24.736 "raid_level": "concat", 00:10:24.736 "superblock": true, 00:10:24.736 "num_base_bdevs": 4, 00:10:24.736 "num_base_bdevs_discovered": 4, 00:10:24.736 "num_base_bdevs_operational": 4, 00:10:24.736 "base_bdevs_list": [ 00:10:24.736 { 00:10:24.736 "name": "NewBaseBdev", 00:10:24.736 "uuid": "17eab2bd-71a0-4af7-9c80-c6498885d621", 00:10:24.736 "is_configured": true, 00:10:24.736 "data_offset": 2048, 00:10:24.736 "data_size": 63488 00:10:24.736 }, 00:10:24.736 { 00:10:24.736 "name": "BaseBdev2", 00:10:24.736 "uuid": "fc38678b-1774-49cf-bbf5-f212646cb673", 00:10:24.736 "is_configured": true, 00:10:24.736 "data_offset": 2048, 00:10:24.736 "data_size": 63488 00:10:24.736 }, 00:10:24.736 { 00:10:24.736 "name": "BaseBdev3", 00:10:24.736 "uuid": "aecb0457-088b-4bdb-92a1-a05bca58b983", 00:10:24.736 "is_configured": true, 00:10:24.736 "data_offset": 2048, 00:10:24.736 "data_size": 63488 00:10:24.736 }, 00:10:24.736 { 00:10:24.736 "name": "BaseBdev4", 00:10:24.736 "uuid": "8dec8791-3279-4c17-b1d0-f749865fafc2", 00:10:24.736 "is_configured": true, 00:10:24.736 "data_offset": 2048, 00:10:24.736 "data_size": 63488 00:10:24.736 } 00:10:24.736 ] 00:10:24.736 }' 00:10:24.736 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.736 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.995 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:24.995 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:24.995 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:24.995 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:24.995 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:24.995 
13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:24.995 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:24.995 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:24.995 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.995 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.995 [2024-11-20 13:24:06.586971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.995 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.995 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:24.995 "name": "Existed_Raid", 00:10:24.995 "aliases": [ 00:10:24.995 "9c5b2841-674e-4ef2-9207-168aafa23fbf" 00:10:24.995 ], 00:10:24.995 "product_name": "Raid Volume", 00:10:24.995 "block_size": 512, 00:10:24.995 "num_blocks": 253952, 00:10:24.995 "uuid": "9c5b2841-674e-4ef2-9207-168aafa23fbf", 00:10:24.995 "assigned_rate_limits": { 00:10:24.995 "rw_ios_per_sec": 0, 00:10:24.995 "rw_mbytes_per_sec": 0, 00:10:24.995 "r_mbytes_per_sec": 0, 00:10:24.995 "w_mbytes_per_sec": 0 00:10:24.995 }, 00:10:24.995 "claimed": false, 00:10:24.995 "zoned": false, 00:10:24.995 "supported_io_types": { 00:10:24.995 "read": true, 00:10:24.995 "write": true, 00:10:24.995 "unmap": true, 00:10:24.995 "flush": true, 00:10:24.995 "reset": true, 00:10:24.995 "nvme_admin": false, 00:10:24.995 "nvme_io": false, 00:10:24.995 "nvme_io_md": false, 00:10:24.995 "write_zeroes": true, 00:10:24.995 "zcopy": false, 00:10:24.995 "get_zone_info": false, 00:10:24.995 "zone_management": false, 00:10:24.995 "zone_append": false, 00:10:24.995 "compare": false, 00:10:24.995 "compare_and_write": false, 00:10:24.995 "abort": 
false, 00:10:24.995 "seek_hole": false, 00:10:24.995 "seek_data": false, 00:10:24.995 "copy": false, 00:10:24.995 "nvme_iov_md": false 00:10:24.995 }, 00:10:24.995 "memory_domains": [ 00:10:24.995 { 00:10:24.995 "dma_device_id": "system", 00:10:24.995 "dma_device_type": 1 00:10:24.995 }, 00:10:24.995 { 00:10:24.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.996 "dma_device_type": 2 00:10:24.996 }, 00:10:24.996 { 00:10:24.996 "dma_device_id": "system", 00:10:24.996 "dma_device_type": 1 00:10:24.996 }, 00:10:24.996 { 00:10:24.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.996 "dma_device_type": 2 00:10:24.996 }, 00:10:24.996 { 00:10:24.996 "dma_device_id": "system", 00:10:24.996 "dma_device_type": 1 00:10:24.996 }, 00:10:24.996 { 00:10:24.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.996 "dma_device_type": 2 00:10:24.996 }, 00:10:24.996 { 00:10:24.996 "dma_device_id": "system", 00:10:24.996 "dma_device_type": 1 00:10:24.996 }, 00:10:24.996 { 00:10:24.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.996 "dma_device_type": 2 00:10:24.996 } 00:10:24.996 ], 00:10:24.996 "driver_specific": { 00:10:24.996 "raid": { 00:10:24.996 "uuid": "9c5b2841-674e-4ef2-9207-168aafa23fbf", 00:10:24.996 "strip_size_kb": 64, 00:10:24.996 "state": "online", 00:10:24.996 "raid_level": "concat", 00:10:24.996 "superblock": true, 00:10:24.996 "num_base_bdevs": 4, 00:10:24.996 "num_base_bdevs_discovered": 4, 00:10:24.996 "num_base_bdevs_operational": 4, 00:10:24.996 "base_bdevs_list": [ 00:10:24.996 { 00:10:24.996 "name": "NewBaseBdev", 00:10:24.996 "uuid": "17eab2bd-71a0-4af7-9c80-c6498885d621", 00:10:24.996 "is_configured": true, 00:10:24.996 "data_offset": 2048, 00:10:24.996 "data_size": 63488 00:10:24.996 }, 00:10:24.996 { 00:10:24.996 "name": "BaseBdev2", 00:10:24.996 "uuid": "fc38678b-1774-49cf-bbf5-f212646cb673", 00:10:24.996 "is_configured": true, 00:10:24.996 "data_offset": 2048, 00:10:24.996 "data_size": 63488 00:10:24.996 }, 00:10:24.996 { 00:10:24.996 
"name": "BaseBdev3", 00:10:24.996 "uuid": "aecb0457-088b-4bdb-92a1-a05bca58b983", 00:10:24.996 "is_configured": true, 00:10:24.996 "data_offset": 2048, 00:10:24.996 "data_size": 63488 00:10:24.996 }, 00:10:24.996 { 00:10:24.996 "name": "BaseBdev4", 00:10:24.996 "uuid": "8dec8791-3279-4c17-b1d0-f749865fafc2", 00:10:24.996 "is_configured": true, 00:10:24.996 "data_offset": 2048, 00:10:24.996 "data_size": 63488 00:10:24.996 } 00:10:24.996 ] 00:10:24.996 } 00:10:24.996 } 00:10:24.996 }' 00:10:24.996 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:25.301 BaseBdev2 00:10:25.301 BaseBdev3 00:10:25.301 BaseBdev4' 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.301 13:24:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.301 [2024-11-20 13:24:06.902095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.301 [2024-11-20 13:24:06.902125] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.301 [2024-11-20 13:24:06.902204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.301 [2024-11-20 13:24:06.902276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.301 [2024-11-20 13:24:06.902287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82515 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82515 ']' 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 82515 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.301 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82515 00:10:25.564 killing process with pid 82515 00:10:25.564 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.564 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.564 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82515' 00:10:25.564 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 82515 00:10:25.564 [2024-11-20 13:24:06.936414] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.564 13:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 82515 00:10:25.564 [2024-11-20 13:24:06.976649] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.564 13:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:25.564 ************************************ 00:10:25.564 END TEST raid_state_function_test_sb 00:10:25.564 ************************************ 00:10:25.564 00:10:25.564 real 0m9.294s 00:10:25.564 user 0m15.948s 00:10:25.564 sys 
0m1.852s 00:10:25.564 13:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.564 13:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.823 13:24:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:25.823 13:24:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:25.823 13:24:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.823 13:24:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.823 ************************************ 00:10:25.823 START TEST raid_superblock_test 00:10:25.823 ************************************ 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83163 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83163 00:10:25.823 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83163 ']' 00:10:25.824 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.824 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.824 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.824 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.824 13:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.824 [2024-11-20 13:24:07.337210] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:25.824 [2024-11-20 13:24:07.337520] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83163 ] 00:10:26.083 [2024-11-20 13:24:07.500171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.083 [2024-11-20 13:24:07.525854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.083 [2024-11-20 13:24:07.568814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.083 [2024-11-20 13:24:07.568935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:26.653 
13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.653 malloc1 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.653 [2024-11-20 13:24:08.187655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:26.653 [2024-11-20 13:24:08.187716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.653 [2024-11-20 13:24:08.187737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:26.653 [2024-11-20 13:24:08.187752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.653 [2024-11-20 13:24:08.190084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.653 [2024-11-20 13:24:08.190128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:26.653 pt1 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.653 malloc2 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.653 [2024-11-20 13:24:08.220502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:26.653 [2024-11-20 13:24:08.220627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.653 [2024-11-20 13:24:08.220667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:26.653 [2024-11-20 13:24:08.220703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.653 [2024-11-20 13:24:08.223154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.653 [2024-11-20 13:24:08.223233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:26.653 
pt2 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.653 malloc3 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.653 [2024-11-20 13:24:08.253432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:26.653 [2024-11-20 13:24:08.253551] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.653 [2024-11-20 13:24:08.253609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:26.653 [2024-11-20 13:24:08.253642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.653 [2024-11-20 13:24:08.255863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.653 [2024-11-20 13:24:08.255939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:26.653 pt3 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.653 malloc4 00:10:26.653 13:24:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.654 [2024-11-20 13:24:08.290686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:26.654 [2024-11-20 13:24:08.290819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.654 [2024-11-20 13:24:08.290871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:26.654 [2024-11-20 13:24:08.290904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.654 [2024-11-20 13:24:08.293085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.654 [2024-11-20 13:24:08.293165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:26.654 pt4 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.654 [2024-11-20 13:24:08.302722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:26.654 [2024-11-20 
13:24:08.304732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:26.654 [2024-11-20 13:24:08.304875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:26.654 [2024-11-20 13:24:08.304944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:26.654 [2024-11-20 13:24:08.305166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:26.654 [2024-11-20 13:24:08.305215] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:26.654 [2024-11-20 13:24:08.305523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:26.654 [2024-11-20 13:24:08.305728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:26.654 [2024-11-20 13:24:08.305769] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:26.654 [2024-11-20 13:24:08.305964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.654 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.914 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.914 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.914 "name": "raid_bdev1", 00:10:26.914 "uuid": "c84302bc-2ed1-4441-92ac-2dde5e4e8282", 00:10:26.914 "strip_size_kb": 64, 00:10:26.914 "state": "online", 00:10:26.914 "raid_level": "concat", 00:10:26.914 "superblock": true, 00:10:26.914 "num_base_bdevs": 4, 00:10:26.914 "num_base_bdevs_discovered": 4, 00:10:26.914 "num_base_bdevs_operational": 4, 00:10:26.914 "base_bdevs_list": [ 00:10:26.914 { 00:10:26.914 "name": "pt1", 00:10:26.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.914 "is_configured": true, 00:10:26.914 "data_offset": 2048, 00:10:26.914 "data_size": 63488 00:10:26.914 }, 00:10:26.914 { 00:10:26.914 "name": "pt2", 00:10:26.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.914 "is_configured": true, 00:10:26.914 "data_offset": 2048, 00:10:26.914 "data_size": 63488 00:10:26.914 }, 00:10:26.914 { 00:10:26.914 "name": "pt3", 00:10:26.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.914 "is_configured": true, 00:10:26.914 "data_offset": 2048, 00:10:26.914 
"data_size": 63488 00:10:26.914 }, 00:10:26.914 { 00:10:26.914 "name": "pt4", 00:10:26.914 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:26.914 "is_configured": true, 00:10:26.914 "data_offset": 2048, 00:10:26.914 "data_size": 63488 00:10:26.914 } 00:10:26.914 ] 00:10:26.914 }' 00:10:26.914 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.914 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.174 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:27.174 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:27.174 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:27.174 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:27.174 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:27.174 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:27.174 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:27.174 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:27.174 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.174 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.174 [2024-11-20 13:24:08.786190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.174 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.174 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:27.174 "name": "raid_bdev1", 00:10:27.174 "aliases": [ 00:10:27.174 "c84302bc-2ed1-4441-92ac-2dde5e4e8282" 
00:10:27.174 ], 00:10:27.174 "product_name": "Raid Volume", 00:10:27.174 "block_size": 512, 00:10:27.174 "num_blocks": 253952, 00:10:27.174 "uuid": "c84302bc-2ed1-4441-92ac-2dde5e4e8282", 00:10:27.174 "assigned_rate_limits": { 00:10:27.174 "rw_ios_per_sec": 0, 00:10:27.174 "rw_mbytes_per_sec": 0, 00:10:27.174 "r_mbytes_per_sec": 0, 00:10:27.174 "w_mbytes_per_sec": 0 00:10:27.174 }, 00:10:27.174 "claimed": false, 00:10:27.174 "zoned": false, 00:10:27.174 "supported_io_types": { 00:10:27.174 "read": true, 00:10:27.174 "write": true, 00:10:27.174 "unmap": true, 00:10:27.174 "flush": true, 00:10:27.174 "reset": true, 00:10:27.174 "nvme_admin": false, 00:10:27.174 "nvme_io": false, 00:10:27.174 "nvme_io_md": false, 00:10:27.174 "write_zeroes": true, 00:10:27.174 "zcopy": false, 00:10:27.174 "get_zone_info": false, 00:10:27.174 "zone_management": false, 00:10:27.174 "zone_append": false, 00:10:27.174 "compare": false, 00:10:27.174 "compare_and_write": false, 00:10:27.174 "abort": false, 00:10:27.174 "seek_hole": false, 00:10:27.174 "seek_data": false, 00:10:27.174 "copy": false, 00:10:27.174 "nvme_iov_md": false 00:10:27.174 }, 00:10:27.174 "memory_domains": [ 00:10:27.174 { 00:10:27.174 "dma_device_id": "system", 00:10:27.174 "dma_device_type": 1 00:10:27.174 }, 00:10:27.174 { 00:10:27.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.174 "dma_device_type": 2 00:10:27.175 }, 00:10:27.175 { 00:10:27.175 "dma_device_id": "system", 00:10:27.175 "dma_device_type": 1 00:10:27.175 }, 00:10:27.175 { 00:10:27.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.175 "dma_device_type": 2 00:10:27.175 }, 00:10:27.175 { 00:10:27.175 "dma_device_id": "system", 00:10:27.175 "dma_device_type": 1 00:10:27.175 }, 00:10:27.175 { 00:10:27.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.175 "dma_device_type": 2 00:10:27.175 }, 00:10:27.175 { 00:10:27.175 "dma_device_id": "system", 00:10:27.175 "dma_device_type": 1 00:10:27.175 }, 00:10:27.175 { 00:10:27.175 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:27.175 "dma_device_type": 2 00:10:27.175 } 00:10:27.175 ], 00:10:27.175 "driver_specific": { 00:10:27.175 "raid": { 00:10:27.175 "uuid": "c84302bc-2ed1-4441-92ac-2dde5e4e8282", 00:10:27.175 "strip_size_kb": 64, 00:10:27.175 "state": "online", 00:10:27.175 "raid_level": "concat", 00:10:27.175 "superblock": true, 00:10:27.175 "num_base_bdevs": 4, 00:10:27.175 "num_base_bdevs_discovered": 4, 00:10:27.175 "num_base_bdevs_operational": 4, 00:10:27.175 "base_bdevs_list": [ 00:10:27.175 { 00:10:27.175 "name": "pt1", 00:10:27.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:27.175 "is_configured": true, 00:10:27.175 "data_offset": 2048, 00:10:27.175 "data_size": 63488 00:10:27.175 }, 00:10:27.175 { 00:10:27.175 "name": "pt2", 00:10:27.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.175 "is_configured": true, 00:10:27.175 "data_offset": 2048, 00:10:27.175 "data_size": 63488 00:10:27.175 }, 00:10:27.175 { 00:10:27.175 "name": "pt3", 00:10:27.175 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.175 "is_configured": true, 00:10:27.175 "data_offset": 2048, 00:10:27.175 "data_size": 63488 00:10:27.175 }, 00:10:27.175 { 00:10:27.175 "name": "pt4", 00:10:27.175 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:27.175 "is_configured": true, 00:10:27.175 "data_offset": 2048, 00:10:27.175 "data_size": 63488 00:10:27.175 } 00:10:27.175 ] 00:10:27.175 } 00:10:27.175 } 00:10:27.175 }' 00:10:27.175 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:27.435 pt2 00:10:27.435 pt3 00:10:27.435 pt4' 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.435 13:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.435 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.435 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.435 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.435 13:24:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:27.435 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.435 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.435 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.436 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.436 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.436 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.436 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.436 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.436 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:27.436 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.436 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.436 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.697 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.697 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.697 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.698 [2024-11-20 13:24:09.121604] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c84302bc-2ed1-4441-92ac-2dde5e4e8282 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c84302bc-2ed1-4441-92ac-2dde5e4e8282 ']' 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.698 [2024-11-20 13:24:09.157210] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.698 [2024-11-20 13:24:09.157251] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.698 [2024-11-20 13:24:09.157331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.698 [2024-11-20 13:24:09.157419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.698 [2024-11-20 13:24:09.157439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.698 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:27.699 13:24:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:27.699 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:27.700 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.700 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.700 [2024-11-20 13:24:09.320947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:27.700 [2024-11-20 13:24:09.322830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:27.700 [2024-11-20 13:24:09.322871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:27.700 [2024-11-20 13:24:09.322899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:27.700 [2024-11-20 13:24:09.322943] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:27.700 [2024-11-20 13:24:09.323010] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:27.700 [2024-11-20 13:24:09.323031] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:27.700 [2024-11-20 13:24:09.323047] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:27.700 [2024-11-20 13:24:09.323060] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.700 [2024-11-20 13:24:09.323069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001580 name raid_bdev1, state configuring 00:10:27.700 request: 00:10:27.700 { 00:10:27.700 "name": "raid_bdev1", 00:10:27.700 "raid_level": "concat", 00:10:27.700 "base_bdevs": [ 00:10:27.700 "malloc1", 00:10:27.700 "malloc2", 00:10:27.700 "malloc3", 00:10:27.700 "malloc4" 00:10:27.700 ], 00:10:27.700 "strip_size_kb": 64, 00:10:27.700 "superblock": false, 00:10:27.700 "method": "bdev_raid_create", 00:10:27.700 "req_id": 1 00:10:27.700 } 00:10:27.700 Got JSON-RPC error response 00:10:27.700 response: 00:10:27.700 { 00:10:27.700 "code": -17, 00:10:27.700 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:27.700 } 00:10:27.700 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:27.700 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:27.700 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:27.700 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:27.700 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:27.700 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:27.701 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.701 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.701 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.701 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.963 [2024-11-20 13:24:09.380813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:27.963 [2024-11-20 13:24:09.380909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.963 [2024-11-20 13:24:09.380950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:27.963 [2024-11-20 13:24:09.380978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.963 [2024-11-20 13:24:09.383174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.963 [2024-11-20 13:24:09.383244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:27.963 [2024-11-20 13:24:09.383338] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:27.963 [2024-11-20 13:24:09.383390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:27.963 pt1 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.963 "name": "raid_bdev1", 00:10:27.963 "uuid": "c84302bc-2ed1-4441-92ac-2dde5e4e8282", 00:10:27.963 "strip_size_kb": 64, 00:10:27.963 "state": "configuring", 00:10:27.963 "raid_level": "concat", 00:10:27.963 "superblock": true, 00:10:27.963 "num_base_bdevs": 4, 00:10:27.963 "num_base_bdevs_discovered": 1, 00:10:27.963 "num_base_bdevs_operational": 4, 00:10:27.963 "base_bdevs_list": [ 00:10:27.963 { 00:10:27.963 "name": "pt1", 00:10:27.963 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:27.963 "is_configured": true, 00:10:27.963 "data_offset": 2048, 00:10:27.963 "data_size": 63488 00:10:27.963 }, 00:10:27.963 { 00:10:27.963 "name": null, 00:10:27.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.963 "is_configured": false, 00:10:27.963 "data_offset": 2048, 00:10:27.963 "data_size": 63488 00:10:27.963 }, 00:10:27.963 { 00:10:27.963 "name": null, 00:10:27.963 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.963 "is_configured": false, 00:10:27.963 "data_offset": 2048, 00:10:27.963 "data_size": 63488 00:10:27.963 }, 00:10:27.963 { 00:10:27.963 "name": null, 00:10:27.963 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:27.963 "is_configured": false, 00:10:27.963 "data_offset": 2048, 00:10:27.963 "data_size": 63488 00:10:27.963 } 00:10:27.963 ] 00:10:27.963 }' 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.963 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.222 [2024-11-20 13:24:09.832080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:28.222 [2024-11-20 13:24:09.832203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.222 [2024-11-20 13:24:09.832244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:28.222 [2024-11-20 13:24:09.832277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.222 [2024-11-20 13:24:09.832700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.222 [2024-11-20 13:24:09.832757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:28.222 [2024-11-20 13:24:09.832860] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:28.222 [2024-11-20 13:24:09.832910] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:28.222 pt2 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.222 [2024-11-20 13:24:09.840073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.222 13:24:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.222 13:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.482 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.482 "name": "raid_bdev1", 00:10:28.482 "uuid": "c84302bc-2ed1-4441-92ac-2dde5e4e8282", 00:10:28.482 "strip_size_kb": 64, 00:10:28.482 "state": "configuring", 00:10:28.482 "raid_level": "concat", 00:10:28.482 "superblock": true, 00:10:28.482 "num_base_bdevs": 4, 00:10:28.482 "num_base_bdevs_discovered": 1, 00:10:28.482 "num_base_bdevs_operational": 4, 00:10:28.482 "base_bdevs_list": [ 00:10:28.482 { 00:10:28.482 "name": "pt1", 00:10:28.482 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:28.482 "is_configured": true, 00:10:28.482 "data_offset": 2048, 00:10:28.482 "data_size": 63488 00:10:28.482 }, 00:10:28.482 { 00:10:28.482 "name": null, 00:10:28.482 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.482 "is_configured": false, 00:10:28.482 "data_offset": 0, 00:10:28.482 "data_size": 63488 00:10:28.482 }, 00:10:28.482 { 00:10:28.482 "name": null, 00:10:28.482 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.482 "is_configured": false, 00:10:28.482 "data_offset": 2048, 00:10:28.482 "data_size": 63488 00:10:28.482 }, 00:10:28.482 { 00:10:28.482 "name": null, 00:10:28.482 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:28.482 "is_configured": false, 00:10:28.482 "data_offset": 2048, 00:10:28.482 "data_size": 63488 00:10:28.482 } 00:10:28.482 ] 00:10:28.482 }' 00:10:28.482 13:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.482 13:24:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.742 [2024-11-20 13:24:10.287372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:28.742 [2024-11-20 13:24:10.287519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.742 [2024-11-20 13:24:10.287546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:28.742 [2024-11-20 13:24:10.287559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.742 [2024-11-20 13:24:10.288011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.742 [2024-11-20 13:24:10.288037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:28.742 [2024-11-20 13:24:10.288119] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:28.742 [2024-11-20 13:24:10.288146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:28.742 pt2 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.742 [2024-11-20 13:24:10.299292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:28.742 [2024-11-20 13:24:10.299351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.742 [2024-11-20 13:24:10.299369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:28.742 [2024-11-20 13:24:10.299380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.742 [2024-11-20 13:24:10.299783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.742 [2024-11-20 13:24:10.299804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:28.742 [2024-11-20 13:24:10.299867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:28.742 [2024-11-20 13:24:10.299890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:28.742 pt3 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.742 [2024-11-20 13:24:10.311273] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:28.742 [2024-11-20 13:24:10.311326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.742 [2024-11-20 13:24:10.311342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:28.742 [2024-11-20 13:24:10.311352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.742 [2024-11-20 13:24:10.311658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.742 [2024-11-20 13:24:10.311678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:28.742 [2024-11-20 13:24:10.311733] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:28.742 [2024-11-20 13:24:10.311753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:28.742 [2024-11-20 13:24:10.311850] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:28.742 [2024-11-20 13:24:10.311860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:28.742 [2024-11-20 13:24:10.312112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:28.742 [2024-11-20 13:24:10.312230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:28.742 [2024-11-20 13:24:10.312239] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:28.742 [2024-11-20 13:24:10.312334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.742 pt4 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.742 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.742 "name": "raid_bdev1", 00:10:28.742 "uuid": "c84302bc-2ed1-4441-92ac-2dde5e4e8282", 00:10:28.742 "strip_size_kb": 64, 00:10:28.742 "state": "online", 00:10:28.742 "raid_level": "concat", 00:10:28.742 
"superblock": true, 00:10:28.742 "num_base_bdevs": 4, 00:10:28.742 "num_base_bdevs_discovered": 4, 00:10:28.742 "num_base_bdevs_operational": 4, 00:10:28.742 "base_bdevs_list": [ 00:10:28.742 { 00:10:28.742 "name": "pt1", 00:10:28.742 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:28.742 "is_configured": true, 00:10:28.742 "data_offset": 2048, 00:10:28.742 "data_size": 63488 00:10:28.742 }, 00:10:28.742 { 00:10:28.742 "name": "pt2", 00:10:28.742 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.742 "is_configured": true, 00:10:28.742 "data_offset": 2048, 00:10:28.742 "data_size": 63488 00:10:28.743 }, 00:10:28.743 { 00:10:28.743 "name": "pt3", 00:10:28.743 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.743 "is_configured": true, 00:10:28.743 "data_offset": 2048, 00:10:28.743 "data_size": 63488 00:10:28.743 }, 00:10:28.743 { 00:10:28.743 "name": "pt4", 00:10:28.743 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:28.743 "is_configured": true, 00:10:28.743 "data_offset": 2048, 00:10:28.743 "data_size": 63488 00:10:28.743 } 00:10:28.743 ] 00:10:28.743 }' 00:10:28.743 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.743 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:29.313 13:24:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.313 [2024-11-20 13:24:10.714949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:29.313 "name": "raid_bdev1", 00:10:29.313 "aliases": [ 00:10:29.313 "c84302bc-2ed1-4441-92ac-2dde5e4e8282" 00:10:29.313 ], 00:10:29.313 "product_name": "Raid Volume", 00:10:29.313 "block_size": 512, 00:10:29.313 "num_blocks": 253952, 00:10:29.313 "uuid": "c84302bc-2ed1-4441-92ac-2dde5e4e8282", 00:10:29.313 "assigned_rate_limits": { 00:10:29.313 "rw_ios_per_sec": 0, 00:10:29.313 "rw_mbytes_per_sec": 0, 00:10:29.313 "r_mbytes_per_sec": 0, 00:10:29.313 "w_mbytes_per_sec": 0 00:10:29.313 }, 00:10:29.313 "claimed": false, 00:10:29.313 "zoned": false, 00:10:29.313 "supported_io_types": { 00:10:29.313 "read": true, 00:10:29.313 "write": true, 00:10:29.313 "unmap": true, 00:10:29.313 "flush": true, 00:10:29.313 "reset": true, 00:10:29.313 "nvme_admin": false, 00:10:29.313 "nvme_io": false, 00:10:29.313 "nvme_io_md": false, 00:10:29.313 "write_zeroes": true, 00:10:29.313 "zcopy": false, 00:10:29.313 "get_zone_info": false, 00:10:29.313 "zone_management": false, 00:10:29.313 "zone_append": false, 00:10:29.313 "compare": false, 00:10:29.313 "compare_and_write": false, 00:10:29.313 "abort": false, 00:10:29.313 "seek_hole": false, 00:10:29.313 "seek_data": false, 00:10:29.313 "copy": false, 00:10:29.313 "nvme_iov_md": false 00:10:29.313 }, 00:10:29.313 
"memory_domains": [ 00:10:29.313 { 00:10:29.313 "dma_device_id": "system", 00:10:29.313 "dma_device_type": 1 00:10:29.313 }, 00:10:29.313 { 00:10:29.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.313 "dma_device_type": 2 00:10:29.313 }, 00:10:29.313 { 00:10:29.313 "dma_device_id": "system", 00:10:29.313 "dma_device_type": 1 00:10:29.313 }, 00:10:29.313 { 00:10:29.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.313 "dma_device_type": 2 00:10:29.313 }, 00:10:29.313 { 00:10:29.313 "dma_device_id": "system", 00:10:29.313 "dma_device_type": 1 00:10:29.313 }, 00:10:29.313 { 00:10:29.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.313 "dma_device_type": 2 00:10:29.313 }, 00:10:29.313 { 00:10:29.313 "dma_device_id": "system", 00:10:29.313 "dma_device_type": 1 00:10:29.313 }, 00:10:29.313 { 00:10:29.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.313 "dma_device_type": 2 00:10:29.313 } 00:10:29.313 ], 00:10:29.313 "driver_specific": { 00:10:29.313 "raid": { 00:10:29.313 "uuid": "c84302bc-2ed1-4441-92ac-2dde5e4e8282", 00:10:29.313 "strip_size_kb": 64, 00:10:29.313 "state": "online", 00:10:29.313 "raid_level": "concat", 00:10:29.313 "superblock": true, 00:10:29.313 "num_base_bdevs": 4, 00:10:29.313 "num_base_bdevs_discovered": 4, 00:10:29.313 "num_base_bdevs_operational": 4, 00:10:29.313 "base_bdevs_list": [ 00:10:29.313 { 00:10:29.313 "name": "pt1", 00:10:29.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:29.313 "is_configured": true, 00:10:29.313 "data_offset": 2048, 00:10:29.313 "data_size": 63488 00:10:29.313 }, 00:10:29.313 { 00:10:29.313 "name": "pt2", 00:10:29.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:29.313 "is_configured": true, 00:10:29.313 "data_offset": 2048, 00:10:29.313 "data_size": 63488 00:10:29.313 }, 00:10:29.313 { 00:10:29.313 "name": "pt3", 00:10:29.313 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:29.313 "is_configured": true, 00:10:29.313 "data_offset": 2048, 00:10:29.313 "data_size": 63488 
00:10:29.313 }, 00:10:29.313 { 00:10:29.313 "name": "pt4", 00:10:29.313 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:29.313 "is_configured": true, 00:10:29.313 "data_offset": 2048, 00:10:29.313 "data_size": 63488 00:10:29.313 } 00:10:29.313 ] 00:10:29.313 } 00:10:29.313 } 00:10:29.313 }' 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:29.313 pt2 00:10:29.313 pt3 00:10:29.313 pt4' 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.313 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.314 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.314 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.314 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.314 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.314 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.314 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:29.314 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.314 13:24:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:29.314 13:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.314 13:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:29.576 [2024-11-20 13:24:11.022407] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c84302bc-2ed1-4441-92ac-2dde5e4e8282 '!=' c84302bc-2ed1-4441-92ac-2dde5e4e8282 ']' 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83163 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83163 ']' 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83163 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83163 00:10:29.576 killing process with pid 83163 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83163' 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 83163 00:10:29.576 [2024-11-20 13:24:11.105667] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:29.576 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 83163 00:10:29.576 [2024-11-20 13:24:11.105777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.576 [2024-11-20 13:24:11.105851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.576 [2024-11-20 13:24:11.105862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:10:29.576 [2024-11-20 13:24:11.150244] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.837 13:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:29.837 00:10:29.837 real 0m4.112s 00:10:29.837 user 0m6.473s 00:10:29.837 sys 0m0.935s 00:10:29.837 ************************************ 00:10:29.837 END TEST raid_superblock_test 00:10:29.837 ************************************ 00:10:29.837 13:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.837 13:24:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.837 13:24:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:29.837 13:24:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:29.837 13:24:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.837 13:24:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.837 ************************************ 00:10:29.837 START TEST raid_read_error_test 00:10:29.837 ************************************ 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CIa7rUWZ91 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83411 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:29.837 13:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83411 00:10:29.838 13:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 83411 ']' 00:10:29.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.838 13:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.838 13:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.838 13:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.838 13:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.838 13:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.097 [2024-11-20 13:24:11.538601] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:30.097 [2024-11-20 13:24:11.538810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83411 ] 00:10:30.097 [2024-11-20 13:24:11.693437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.097 [2024-11-20 13:24:11.719263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.097 [2024-11-20 13:24:11.762276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.097 [2024-11-20 13:24:11.762310] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.036 BaseBdev1_malloc 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.036 true 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.036 [2024-11-20 13:24:12.400605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:31.036 [2024-11-20 13:24:12.400713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.036 [2024-11-20 13:24:12.400756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:31.036 [2024-11-20 13:24:12.400807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.036 [2024-11-20 13:24:12.403002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.036 [2024-11-20 13:24:12.403078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:31.036 BaseBdev1 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.036 BaseBdev2_malloc 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.036 true 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.036 [2024-11-20 13:24:12.441173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:31.036 [2024-11-20 13:24:12.441259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.036 [2024-11-20 13:24:12.441292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:31.036 [2024-11-20 13:24:12.441331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.036 [2024-11-20 13:24:12.443428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.036 [2024-11-20 13:24:12.443512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:31.036 BaseBdev2 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.036 BaseBdev3_malloc 00:10:31.036 13:24:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.036 true 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.036 [2024-11-20 13:24:12.481956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:31.036 [2024-11-20 13:24:12.482015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.036 [2024-11-20 13:24:12.482035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:31.036 [2024-11-20 13:24:12.482044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.036 [2024-11-20 13:24:12.484357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.036 [2024-11-20 13:24:12.484448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:31.036 BaseBdev3 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.036 BaseBdev4_malloc 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.036 true 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.036 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.036 [2024-11-20 13:24:12.530240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:31.036 [2024-11-20 13:24:12.530336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.036 [2024-11-20 13:24:12.530364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:31.036 [2024-11-20 13:24:12.530373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.036 [2024-11-20 13:24:12.532459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.037 [2024-11-20 13:24:12.532496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:31.037 BaseBdev4 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.037 [2024-11-20 13:24:12.542266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.037 [2024-11-20 13:24:12.544175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.037 [2024-11-20 13:24:12.544306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.037 [2024-11-20 13:24:12.544395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:31.037 [2024-11-20 13:24:12.544643] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:31.037 [2024-11-20 13:24:12.544689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:31.037 [2024-11-20 13:24:12.544959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:10:31.037 [2024-11-20 13:24:12.545139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:31.037 [2024-11-20 13:24:12.545184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:31.037 [2024-11-20 13:24:12.545347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:31.037 13:24:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.037 "name": "raid_bdev1", 00:10:31.037 "uuid": "08bcf3dd-6655-4917-8a62-6f724bdb699b", 00:10:31.037 "strip_size_kb": 64, 00:10:31.037 "state": "online", 00:10:31.037 "raid_level": "concat", 00:10:31.037 "superblock": true, 00:10:31.037 "num_base_bdevs": 4, 00:10:31.037 "num_base_bdevs_discovered": 4, 00:10:31.037 "num_base_bdevs_operational": 4, 00:10:31.037 "base_bdevs_list": [ 
00:10:31.037 { 00:10:31.037 "name": "BaseBdev1", 00:10:31.037 "uuid": "40654c56-a548-59d0-abe1-33c2041682f5", 00:10:31.037 "is_configured": true, 00:10:31.037 "data_offset": 2048, 00:10:31.037 "data_size": 63488 00:10:31.037 }, 00:10:31.037 { 00:10:31.037 "name": "BaseBdev2", 00:10:31.037 "uuid": "5e446ae8-0c13-58f9-9879-11dc6faabcc2", 00:10:31.037 "is_configured": true, 00:10:31.037 "data_offset": 2048, 00:10:31.037 "data_size": 63488 00:10:31.037 }, 00:10:31.037 { 00:10:31.037 "name": "BaseBdev3", 00:10:31.037 "uuid": "d7cbea8b-9e05-5152-be4a-030e323bd2b7", 00:10:31.037 "is_configured": true, 00:10:31.037 "data_offset": 2048, 00:10:31.037 "data_size": 63488 00:10:31.037 }, 00:10:31.037 { 00:10:31.037 "name": "BaseBdev4", 00:10:31.037 "uuid": "a479193a-a91d-54a2-acbe-0cddb5978d32", 00:10:31.037 "is_configured": true, 00:10:31.037 "data_offset": 2048, 00:10:31.037 "data_size": 63488 00:10:31.037 } 00:10:31.037 ] 00:10:31.037 }' 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.037 13:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.607 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:31.607 13:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:31.607 [2024-11-20 13:24:13.085733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:10:32.547 13:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:32.547 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.547 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.547 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.547 13:24:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:32.547 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:32.547 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:32.547 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:32.547 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.547 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.547 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.548 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.548 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.548 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.548 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.548 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.548 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.548 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.548 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.548 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.548 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.548 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.548 13:24:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.548 "name": "raid_bdev1", 00:10:32.548 "uuid": "08bcf3dd-6655-4917-8a62-6f724bdb699b", 00:10:32.548 "strip_size_kb": 64, 00:10:32.548 "state": "online", 00:10:32.548 "raid_level": "concat", 00:10:32.548 "superblock": true, 00:10:32.548 "num_base_bdevs": 4, 00:10:32.548 "num_base_bdevs_discovered": 4, 00:10:32.548 "num_base_bdevs_operational": 4, 00:10:32.548 "base_bdevs_list": [ 00:10:32.548 { 00:10:32.548 "name": "BaseBdev1", 00:10:32.548 "uuid": "40654c56-a548-59d0-abe1-33c2041682f5", 00:10:32.548 "is_configured": true, 00:10:32.548 "data_offset": 2048, 00:10:32.548 "data_size": 63488 00:10:32.548 }, 00:10:32.548 { 00:10:32.548 "name": "BaseBdev2", 00:10:32.548 "uuid": "5e446ae8-0c13-58f9-9879-11dc6faabcc2", 00:10:32.548 "is_configured": true, 00:10:32.548 "data_offset": 2048, 00:10:32.548 "data_size": 63488 00:10:32.548 }, 00:10:32.548 { 00:10:32.548 "name": "BaseBdev3", 00:10:32.548 "uuid": "d7cbea8b-9e05-5152-be4a-030e323bd2b7", 00:10:32.548 "is_configured": true, 00:10:32.548 "data_offset": 2048, 00:10:32.548 "data_size": 63488 00:10:32.548 }, 00:10:32.548 { 00:10:32.548 "name": "BaseBdev4", 00:10:32.548 "uuid": "a479193a-a91d-54a2-acbe-0cddb5978d32", 00:10:32.548 "is_configured": true, 00:10:32.548 "data_offset": 2048, 00:10:32.548 "data_size": 63488 00:10:32.548 } 00:10:32.548 ] 00:10:32.548 }' 00:10:32.548 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.548 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.815 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.815 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.815 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.815 [2024-11-20 13:24:14.453880] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.815 [2024-11-20 13:24:14.453980] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.815 [2024-11-20 13:24:14.456710] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.815 [2024-11-20 13:24:14.456815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.815 [2024-11-20 13:24:14.456895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.815 [2024-11-20 13:24:14.456950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:32.815 { 00:10:32.815 "results": [ 00:10:32.815 { 00:10:32.815 "job": "raid_bdev1", 00:10:32.815 "core_mask": "0x1", 00:10:32.815 "workload": "randrw", 00:10:32.815 "percentage": 50, 00:10:32.815 "status": "finished", 00:10:32.815 "queue_depth": 1, 00:10:32.815 "io_size": 131072, 00:10:32.815 "runtime": 1.368996, 00:10:32.815 "iops": 16125.686269353599, 00:10:32.815 "mibps": 2015.7107836691998, 00:10:32.815 "io_failed": 1, 00:10:32.815 "io_timeout": 0, 00:10:32.815 "avg_latency_us": 85.88051988742063, 00:10:32.815 "min_latency_us": 26.1589519650655, 00:10:32.815 "max_latency_us": 1366.5257641921398 00:10:32.815 } 00:10:32.815 ], 00:10:32.815 "core_count": 1 00:10:32.815 } 00:10:32.815 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.815 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83411 00:10:32.815 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 83411 ']' 00:10:32.815 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 83411 00:10:32.815 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:32.815 13:24:14 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.815 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83411 00:10:33.086 killing process with pid 83411 00:10:33.086 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.086 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.086 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83411' 00:10:33.086 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 83411 00:10:33.086 [2024-11-20 13:24:14.489848] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.086 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 83411 00:10:33.086 [2024-11-20 13:24:14.525021] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.086 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CIa7rUWZ91 00:10:33.086 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:33.086 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:33.086 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:33.086 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:33.086 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:33.086 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:33.086 ************************************ 00:10:33.086 END TEST raid_read_error_test 00:10:33.086 ************************************ 00:10:33.086 13:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:33.086 00:10:33.086 real 0m3.305s 
00:10:33.086 user 0m4.173s 00:10:33.086 sys 0m0.540s 00:10:33.086 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.086 13:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.345 13:24:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:33.346 13:24:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:33.346 13:24:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.346 13:24:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.346 ************************************ 00:10:33.346 START TEST raid_write_error_test 00:10:33.346 ************************************ 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BjWJuZAzB8 00:10:33.346 13:24:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83540 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83540 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 83540 ']' 00:10:33.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.346 13:24:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.346 [2024-11-20 13:24:14.911710] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:33.346 [2024-11-20 13:24:14.911840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83540 ] 00:10:33.606 [2024-11-20 13:24:15.064395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.606 [2024-11-20 13:24:15.090624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.606 [2024-11-20 13:24:15.133853] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.606 [2024-11-20 13:24:15.133895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.175 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.175 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:34.175 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.175 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:34.175 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.175 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.175 BaseBdev1_malloc 00:10:34.175 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.175 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:34.175 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.175 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.175 true 00:10:34.175 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:34.175 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:34.175 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.175 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.175 [2024-11-20 13:24:15.784893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:34.175 [2024-11-20 13:24:15.784951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.175 [2024-11-20 13:24:15.784975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:34.175 [2024-11-20 13:24:15.784998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.176 [2024-11-20 13:24:15.787205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.176 [2024-11-20 13:24:15.787284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:34.176 BaseBdev1 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.176 BaseBdev2_malloc 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:34.176 13:24:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.176 true 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.176 [2024-11-20 13:24:15.825703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:34.176 [2024-11-20 13:24:15.825753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.176 [2024-11-20 13:24:15.825771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:34.176 [2024-11-20 13:24:15.825788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.176 [2024-11-20 13:24:15.827967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.176 [2024-11-20 13:24:15.828020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:34.176 BaseBdev2 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.176 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:34.435 BaseBdev3_malloc 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.436 true 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.436 [2024-11-20 13:24:15.866385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:34.436 [2024-11-20 13:24:15.866431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.436 [2024-11-20 13:24:15.866465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:34.436 [2024-11-20 13:24:15.866473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.436 [2024-11-20 13:24:15.868563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.436 [2024-11-20 13:24:15.868602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:34.436 BaseBdev3 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.436 BaseBdev4_malloc 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.436 true 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.436 [2024-11-20 13:24:15.916436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:34.436 [2024-11-20 13:24:15.916559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.436 [2024-11-20 13:24:15.916590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:34.436 [2024-11-20 13:24:15.916599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.436 [2024-11-20 13:24:15.918901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.436 [2024-11-20 13:24:15.918940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:34.436 BaseBdev4 
00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.436 [2024-11-20 13:24:15.928480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.436 [2024-11-20 13:24:15.930357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.436 [2024-11-20 13:24:15.930434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.436 [2024-11-20 13:24:15.930492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:34.436 [2024-11-20 13:24:15.930695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:34.436 [2024-11-20 13:24:15.930707] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:34.436 [2024-11-20 13:24:15.930983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:10:34.436 [2024-11-20 13:24:15.931134] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:34.436 [2024-11-20 13:24:15.931147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:34.436 [2024-11-20 13:24:15.931273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.436 "name": "raid_bdev1", 00:10:34.436 "uuid": "b25e045e-d772-4fef-b312-141ed3bf0137", 00:10:34.436 "strip_size_kb": 64, 00:10:34.436 "state": "online", 00:10:34.436 "raid_level": "concat", 00:10:34.436 "superblock": true, 00:10:34.436 "num_base_bdevs": 4, 00:10:34.436 "num_base_bdevs_discovered": 4, 00:10:34.436 
"num_base_bdevs_operational": 4, 00:10:34.436 "base_bdevs_list": [ 00:10:34.436 { 00:10:34.436 "name": "BaseBdev1", 00:10:34.436 "uuid": "8a55972d-dba8-5f86-b165-592cd145ad07", 00:10:34.436 "is_configured": true, 00:10:34.436 "data_offset": 2048, 00:10:34.436 "data_size": 63488 00:10:34.436 }, 00:10:34.436 { 00:10:34.436 "name": "BaseBdev2", 00:10:34.436 "uuid": "9614d0ce-d109-5267-9395-8197b816cde4", 00:10:34.436 "is_configured": true, 00:10:34.436 "data_offset": 2048, 00:10:34.436 "data_size": 63488 00:10:34.436 }, 00:10:34.436 { 00:10:34.436 "name": "BaseBdev3", 00:10:34.436 "uuid": "45232417-86f7-536c-851a-c060aedaf9b6", 00:10:34.436 "is_configured": true, 00:10:34.436 "data_offset": 2048, 00:10:34.436 "data_size": 63488 00:10:34.436 }, 00:10:34.436 { 00:10:34.436 "name": "BaseBdev4", 00:10:34.436 "uuid": "59628945-dcc6-50d9-a4a2-4bdd5b363846", 00:10:34.436 "is_configured": true, 00:10:34.436 "data_offset": 2048, 00:10:34.436 "data_size": 63488 00:10:34.436 } 00:10:34.436 ] 00:10:34.436 }' 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.436 13:24:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.695 13:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:34.695 13:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:34.954 [2024-11-20 13:24:16.420104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.892 13:24:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.892 "name": "raid_bdev1", 00:10:35.892 "uuid": "b25e045e-d772-4fef-b312-141ed3bf0137", 00:10:35.892 "strip_size_kb": 64, 00:10:35.892 "state": "online", 00:10:35.892 "raid_level": "concat", 00:10:35.892 "superblock": true, 00:10:35.892 "num_base_bdevs": 4, 00:10:35.892 "num_base_bdevs_discovered": 4, 00:10:35.892 "num_base_bdevs_operational": 4, 00:10:35.892 "base_bdevs_list": [ 00:10:35.892 { 00:10:35.892 "name": "BaseBdev1", 00:10:35.892 "uuid": "8a55972d-dba8-5f86-b165-592cd145ad07", 00:10:35.892 "is_configured": true, 00:10:35.892 "data_offset": 2048, 00:10:35.892 "data_size": 63488 00:10:35.892 }, 00:10:35.892 { 00:10:35.892 "name": "BaseBdev2", 00:10:35.892 "uuid": "9614d0ce-d109-5267-9395-8197b816cde4", 00:10:35.892 "is_configured": true, 00:10:35.892 "data_offset": 2048, 00:10:35.892 "data_size": 63488 00:10:35.892 }, 00:10:35.892 { 00:10:35.892 "name": "BaseBdev3", 00:10:35.892 "uuid": "45232417-86f7-536c-851a-c060aedaf9b6", 00:10:35.892 "is_configured": true, 00:10:35.892 "data_offset": 2048, 00:10:35.892 "data_size": 63488 00:10:35.892 }, 00:10:35.892 { 00:10:35.892 "name": "BaseBdev4", 00:10:35.892 "uuid": "59628945-dcc6-50d9-a4a2-4bdd5b363846", 00:10:35.892 "is_configured": true, 00:10:35.892 "data_offset": 2048, 00:10:35.892 "data_size": 63488 00:10:35.892 } 00:10:35.892 ] 00:10:35.892 }' 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.892 13:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.152 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:36.152 13:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.152 13:24:17 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.152 [2024-11-20 13:24:17.772136] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.152 [2024-11-20 13:24:17.772230] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.152 [2024-11-20 13:24:17.774885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.152 [2024-11-20 13:24:17.774988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.152 [2024-11-20 13:24:17.775072] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.152 [2024-11-20 13:24:17.775124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:36.152 { 00:10:36.152 "results": [ 00:10:36.152 { 00:10:36.152 "job": "raid_bdev1", 00:10:36.152 "core_mask": "0x1", 00:10:36.152 "workload": "randrw", 00:10:36.152 "percentage": 50, 00:10:36.152 "status": "finished", 00:10:36.152 "queue_depth": 1, 00:10:36.152 "io_size": 131072, 00:10:36.152 "runtime": 1.352735, 00:10:36.152 "iops": 16301.049355564837, 00:10:36.152 "mibps": 2037.6311694456047, 00:10:36.152 "io_failed": 1, 00:10:36.152 "io_timeout": 0, 00:10:36.152 "avg_latency_us": 85.0125561099331, 00:10:36.152 "min_latency_us": 26.494323144104804, 00:10:36.152 "max_latency_us": 1495.3082969432314 00:10:36.152 } 00:10:36.152 ], 00:10:36.152 "core_count": 1 00:10:36.152 } 00:10:36.152 13:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.152 13:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83540 00:10:36.152 13:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 83540 ']' 00:10:36.152 13:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 83540 00:10:36.152 13:24:17 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:10:36.152 13:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.152 13:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83540 00:10:36.152 13:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.152 13:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.152 13:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83540' 00:10:36.152 killing process with pid 83540 00:10:36.152 13:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 83540 00:10:36.152 [2024-11-20 13:24:17.819540] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.152 13:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 83540 00:10:36.412 [2024-11-20 13:24:17.855488] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:36.412 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BjWJuZAzB8 00:10:36.413 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:36.413 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:36.413 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:36.413 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:36.413 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:36.413 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:36.413 13:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:36.413 ************************************ 00:10:36.413 END TEST 
raid_write_error_test 00:10:36.413 ************************************ 00:10:36.413 00:10:36.413 real 0m3.255s 00:10:36.413 user 0m4.108s 00:10:36.413 sys 0m0.508s 00:10:36.413 13:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.413 13:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.672 13:24:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:36.672 13:24:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:36.672 13:24:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:36.672 13:24:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.672 13:24:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.672 ************************************ 00:10:36.672 START TEST raid_state_function_test 00:10:36.672 ************************************ 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.672 13:24:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:36.672 13:24:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83667 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83667' 00:10:36.672 Process raid pid: 83667 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83667 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83667 ']' 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.672 13:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.672 [2024-11-20 13:24:18.231520] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:36.672 [2024-11-20 13:24:18.231744] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.931 [2024-11-20 13:24:18.382510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.931 [2024-11-20 13:24:18.408970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.931 [2024-11-20 13:24:18.452541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.931 [2024-11-20 13:24:18.452582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.501 [2024-11-20 13:24:19.077968] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.501 [2024-11-20 13:24:19.078039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.501 [2024-11-20 13:24:19.078057] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.501 [2024-11-20 13:24:19.078067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.501 [2024-11-20 13:24:19.078073] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:37.501 [2024-11-20 13:24:19.078084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.501 [2024-11-20 13:24:19.078090] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:37.501 [2024-11-20 13:24:19.078098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.501 "name": "Existed_Raid", 00:10:37.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.501 "strip_size_kb": 0, 00:10:37.501 "state": "configuring", 00:10:37.501 "raid_level": "raid1", 00:10:37.501 "superblock": false, 00:10:37.501 "num_base_bdevs": 4, 00:10:37.501 "num_base_bdevs_discovered": 0, 00:10:37.501 "num_base_bdevs_operational": 4, 00:10:37.501 "base_bdevs_list": [ 00:10:37.501 { 00:10:37.501 "name": "BaseBdev1", 00:10:37.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.501 "is_configured": false, 00:10:37.501 "data_offset": 0, 00:10:37.501 "data_size": 0 00:10:37.501 }, 00:10:37.501 { 00:10:37.501 "name": "BaseBdev2", 00:10:37.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.501 "is_configured": false, 00:10:37.501 "data_offset": 0, 00:10:37.501 "data_size": 0 00:10:37.501 }, 00:10:37.501 { 00:10:37.501 "name": "BaseBdev3", 00:10:37.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.501 "is_configured": false, 00:10:37.501 "data_offset": 0, 00:10:37.501 "data_size": 0 00:10:37.501 }, 00:10:37.501 { 00:10:37.501 "name": "BaseBdev4", 00:10:37.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.501 "is_configured": false, 00:10:37.501 "data_offset": 0, 00:10:37.501 "data_size": 0 00:10:37.501 } 00:10:37.501 ] 00:10:37.501 }' 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.501 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.108 [2024-11-20 13:24:19.557074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.108 [2024-11-20 13:24:19.557160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.108 [2024-11-20 13:24:19.569054] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.108 [2024-11-20 13:24:19.569149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.108 [2024-11-20 13:24:19.569176] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.108 [2024-11-20 13:24:19.569200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.108 [2024-11-20 13:24:19.569218] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.108 [2024-11-20 13:24:19.569240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.108 [2024-11-20 13:24:19.569258] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:38.108 [2024-11-20 13:24:19.569279] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.108 [2024-11-20 13:24:19.589944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.108 BaseBdev1 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.108 [ 00:10:38.108 { 00:10:38.108 "name": "BaseBdev1", 00:10:38.108 "aliases": [ 00:10:38.108 "f349416e-7e2c-4d3d-bacb-22949918c275" 00:10:38.108 ], 00:10:38.108 "product_name": "Malloc disk", 00:10:38.108 "block_size": 512, 00:10:38.108 "num_blocks": 65536, 00:10:38.108 "uuid": "f349416e-7e2c-4d3d-bacb-22949918c275", 00:10:38.108 "assigned_rate_limits": { 00:10:38.108 "rw_ios_per_sec": 0, 00:10:38.108 "rw_mbytes_per_sec": 0, 00:10:38.108 "r_mbytes_per_sec": 0, 00:10:38.108 "w_mbytes_per_sec": 0 00:10:38.108 }, 00:10:38.108 "claimed": true, 00:10:38.108 "claim_type": "exclusive_write", 00:10:38.108 "zoned": false, 00:10:38.108 "supported_io_types": { 00:10:38.108 "read": true, 00:10:38.108 "write": true, 00:10:38.108 "unmap": true, 00:10:38.108 "flush": true, 00:10:38.108 "reset": true, 00:10:38.108 "nvme_admin": false, 00:10:38.108 "nvme_io": false, 00:10:38.108 "nvme_io_md": false, 00:10:38.108 "write_zeroes": true, 00:10:38.108 "zcopy": true, 00:10:38.108 "get_zone_info": false, 00:10:38.108 "zone_management": false, 00:10:38.108 "zone_append": false, 00:10:38.108 "compare": false, 00:10:38.108 "compare_and_write": false, 00:10:38.108 "abort": true, 00:10:38.108 "seek_hole": false, 00:10:38.108 "seek_data": false, 00:10:38.108 "copy": true, 00:10:38.108 "nvme_iov_md": false 00:10:38.108 }, 00:10:38.108 "memory_domains": [ 00:10:38.108 { 00:10:38.108 "dma_device_id": "system", 00:10:38.108 "dma_device_type": 1 00:10:38.108 }, 00:10:38.108 { 00:10:38.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.108 "dma_device_type": 2 00:10:38.108 } 00:10:38.108 ], 00:10:38.108 "driver_specific": {} 00:10:38.108 } 00:10:38.108 ] 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.108 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.108 "name": "Existed_Raid", 
00:10:38.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.109 "strip_size_kb": 0, 00:10:38.109 "state": "configuring", 00:10:38.109 "raid_level": "raid1", 00:10:38.109 "superblock": false, 00:10:38.109 "num_base_bdevs": 4, 00:10:38.109 "num_base_bdevs_discovered": 1, 00:10:38.109 "num_base_bdevs_operational": 4, 00:10:38.109 "base_bdevs_list": [ 00:10:38.109 { 00:10:38.109 "name": "BaseBdev1", 00:10:38.109 "uuid": "f349416e-7e2c-4d3d-bacb-22949918c275", 00:10:38.109 "is_configured": true, 00:10:38.109 "data_offset": 0, 00:10:38.109 "data_size": 65536 00:10:38.109 }, 00:10:38.109 { 00:10:38.109 "name": "BaseBdev2", 00:10:38.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.109 "is_configured": false, 00:10:38.109 "data_offset": 0, 00:10:38.109 "data_size": 0 00:10:38.109 }, 00:10:38.109 { 00:10:38.109 "name": "BaseBdev3", 00:10:38.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.109 "is_configured": false, 00:10:38.109 "data_offset": 0, 00:10:38.109 "data_size": 0 00:10:38.109 }, 00:10:38.109 { 00:10:38.109 "name": "BaseBdev4", 00:10:38.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.109 "is_configured": false, 00:10:38.109 "data_offset": 0, 00:10:38.109 "data_size": 0 00:10:38.109 } 00:10:38.109 ] 00:10:38.109 }' 00:10:38.109 13:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.109 13:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.678 [2024-11-20 13:24:20.073166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.678 [2024-11-20 13:24:20.073293] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.678 [2024-11-20 13:24:20.085160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.678 [2024-11-20 13:24:20.086971] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.678 [2024-11-20 13:24:20.087019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.678 [2024-11-20 13:24:20.087030] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.678 [2024-11-20 13:24:20.087039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.678 [2024-11-20 13:24:20.087045] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:38.678 [2024-11-20 13:24:20.087053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:38.678 
13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.678 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.678 "name": "Existed_Raid", 00:10:38.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.679 "strip_size_kb": 0, 00:10:38.679 "state": "configuring", 00:10:38.679 "raid_level": "raid1", 00:10:38.679 "superblock": false, 00:10:38.679 "num_base_bdevs": 4, 00:10:38.679 "num_base_bdevs_discovered": 1, 
00:10:38.679 "num_base_bdevs_operational": 4, 00:10:38.679 "base_bdevs_list": [ 00:10:38.679 { 00:10:38.679 "name": "BaseBdev1", 00:10:38.679 "uuid": "f349416e-7e2c-4d3d-bacb-22949918c275", 00:10:38.679 "is_configured": true, 00:10:38.679 "data_offset": 0, 00:10:38.679 "data_size": 65536 00:10:38.679 }, 00:10:38.679 { 00:10:38.679 "name": "BaseBdev2", 00:10:38.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.679 "is_configured": false, 00:10:38.679 "data_offset": 0, 00:10:38.679 "data_size": 0 00:10:38.679 }, 00:10:38.679 { 00:10:38.679 "name": "BaseBdev3", 00:10:38.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.679 "is_configured": false, 00:10:38.679 "data_offset": 0, 00:10:38.679 "data_size": 0 00:10:38.679 }, 00:10:38.679 { 00:10:38.679 "name": "BaseBdev4", 00:10:38.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.679 "is_configured": false, 00:10:38.679 "data_offset": 0, 00:10:38.679 "data_size": 0 00:10:38.679 } 00:10:38.679 ] 00:10:38.679 }' 00:10:38.679 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.679 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.939 BaseBdev2 00:10:38.939 [2024-11-20 13:24:20.527467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.939 [ 00:10:38.939 { 00:10:38.939 "name": "BaseBdev2", 00:10:38.939 "aliases": [ 00:10:38.939 "01a5dda0-d204-4591-b2d5-a04d325d36fe" 00:10:38.939 ], 00:10:38.939 "product_name": "Malloc disk", 00:10:38.939 "block_size": 512, 00:10:38.939 "num_blocks": 65536, 00:10:38.939 "uuid": "01a5dda0-d204-4591-b2d5-a04d325d36fe", 00:10:38.939 "assigned_rate_limits": { 00:10:38.939 "rw_ios_per_sec": 0, 00:10:38.939 "rw_mbytes_per_sec": 0, 00:10:38.939 "r_mbytes_per_sec": 0, 00:10:38.939 "w_mbytes_per_sec": 0 00:10:38.939 }, 00:10:38.939 "claimed": true, 00:10:38.939 "claim_type": "exclusive_write", 00:10:38.939 "zoned": false, 00:10:38.939 "supported_io_types": { 00:10:38.939 "read": true, 
00:10:38.939 "write": true, 00:10:38.939 "unmap": true, 00:10:38.939 "flush": true, 00:10:38.939 "reset": true, 00:10:38.939 "nvme_admin": false, 00:10:38.939 "nvme_io": false, 00:10:38.939 "nvme_io_md": false, 00:10:38.939 "write_zeroes": true, 00:10:38.939 "zcopy": true, 00:10:38.939 "get_zone_info": false, 00:10:38.939 "zone_management": false, 00:10:38.939 "zone_append": false, 00:10:38.939 "compare": false, 00:10:38.939 "compare_and_write": false, 00:10:38.939 "abort": true, 00:10:38.939 "seek_hole": false, 00:10:38.939 "seek_data": false, 00:10:38.939 "copy": true, 00:10:38.939 "nvme_iov_md": false 00:10:38.939 }, 00:10:38.939 "memory_domains": [ 00:10:38.939 { 00:10:38.939 "dma_device_id": "system", 00:10:38.939 "dma_device_type": 1 00:10:38.939 }, 00:10:38.939 { 00:10:38.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.939 "dma_device_type": 2 00:10:38.939 } 00:10:38.939 ], 00:10:38.939 "driver_specific": {} 00:10:38.939 } 00:10:38.939 ] 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.939 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.199 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.199 "name": "Existed_Raid", 00:10:39.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.199 "strip_size_kb": 0, 00:10:39.199 "state": "configuring", 00:10:39.199 "raid_level": "raid1", 00:10:39.199 "superblock": false, 00:10:39.199 "num_base_bdevs": 4, 00:10:39.199 "num_base_bdevs_discovered": 2, 00:10:39.199 "num_base_bdevs_operational": 4, 00:10:39.199 "base_bdevs_list": [ 00:10:39.199 { 00:10:39.199 "name": "BaseBdev1", 00:10:39.199 "uuid": "f349416e-7e2c-4d3d-bacb-22949918c275", 00:10:39.199 "is_configured": true, 00:10:39.199 "data_offset": 0, 00:10:39.199 "data_size": 65536 00:10:39.199 }, 00:10:39.199 { 00:10:39.199 "name": "BaseBdev2", 00:10:39.199 "uuid": "01a5dda0-d204-4591-b2d5-a04d325d36fe", 00:10:39.199 "is_configured": true, 
00:10:39.199 "data_offset": 0, 00:10:39.199 "data_size": 65536 00:10:39.199 }, 00:10:39.199 { 00:10:39.199 "name": "BaseBdev3", 00:10:39.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.199 "is_configured": false, 00:10:39.199 "data_offset": 0, 00:10:39.199 "data_size": 0 00:10:39.199 }, 00:10:39.199 { 00:10:39.199 "name": "BaseBdev4", 00:10:39.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.199 "is_configured": false, 00:10:39.199 "data_offset": 0, 00:10:39.199 "data_size": 0 00:10:39.199 } 00:10:39.199 ] 00:10:39.199 }' 00:10:39.199 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.199 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.459 13:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:39.459 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.459 13:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.459 [2024-11-20 13:24:21.016030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.459 BaseBdev3 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.459 [ 00:10:39.459 { 00:10:39.459 "name": "BaseBdev3", 00:10:39.459 "aliases": [ 00:10:39.459 "4a514a65-150f-4893-9428-d0dcb3f6bfec" 00:10:39.459 ], 00:10:39.459 "product_name": "Malloc disk", 00:10:39.459 "block_size": 512, 00:10:39.459 "num_blocks": 65536, 00:10:39.459 "uuid": "4a514a65-150f-4893-9428-d0dcb3f6bfec", 00:10:39.459 "assigned_rate_limits": { 00:10:39.459 "rw_ios_per_sec": 0, 00:10:39.459 "rw_mbytes_per_sec": 0, 00:10:39.459 "r_mbytes_per_sec": 0, 00:10:39.459 "w_mbytes_per_sec": 0 00:10:39.459 }, 00:10:39.459 "claimed": true, 00:10:39.459 "claim_type": "exclusive_write", 00:10:39.459 "zoned": false, 00:10:39.459 "supported_io_types": { 00:10:39.459 "read": true, 00:10:39.459 "write": true, 00:10:39.459 "unmap": true, 00:10:39.459 "flush": true, 00:10:39.459 "reset": true, 00:10:39.459 "nvme_admin": false, 00:10:39.459 "nvme_io": false, 00:10:39.459 "nvme_io_md": false, 00:10:39.459 "write_zeroes": true, 00:10:39.459 "zcopy": true, 00:10:39.459 "get_zone_info": false, 00:10:39.459 "zone_management": false, 00:10:39.459 "zone_append": false, 00:10:39.459 "compare": false, 00:10:39.459 "compare_and_write": false, 
00:10:39.459 "abort": true, 00:10:39.459 "seek_hole": false, 00:10:39.459 "seek_data": false, 00:10:39.459 "copy": true, 00:10:39.459 "nvme_iov_md": false 00:10:39.459 }, 00:10:39.459 "memory_domains": [ 00:10:39.459 { 00:10:39.459 "dma_device_id": "system", 00:10:39.459 "dma_device_type": 1 00:10:39.459 }, 00:10:39.459 { 00:10:39.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.459 "dma_device_type": 2 00:10:39.459 } 00:10:39.459 ], 00:10:39.459 "driver_specific": {} 00:10:39.459 } 00:10:39.459 ] 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.459 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.459 "name": "Existed_Raid", 00:10:39.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.459 "strip_size_kb": 0, 00:10:39.459 "state": "configuring", 00:10:39.459 "raid_level": "raid1", 00:10:39.459 "superblock": false, 00:10:39.459 "num_base_bdevs": 4, 00:10:39.459 "num_base_bdevs_discovered": 3, 00:10:39.459 "num_base_bdevs_operational": 4, 00:10:39.459 "base_bdevs_list": [ 00:10:39.459 { 00:10:39.459 "name": "BaseBdev1", 00:10:39.459 "uuid": "f349416e-7e2c-4d3d-bacb-22949918c275", 00:10:39.459 "is_configured": true, 00:10:39.459 "data_offset": 0, 00:10:39.459 "data_size": 65536 00:10:39.459 }, 00:10:39.459 { 00:10:39.459 "name": "BaseBdev2", 00:10:39.459 "uuid": "01a5dda0-d204-4591-b2d5-a04d325d36fe", 00:10:39.459 "is_configured": true, 00:10:39.459 "data_offset": 0, 00:10:39.459 "data_size": 65536 00:10:39.460 }, 00:10:39.460 { 00:10:39.460 "name": "BaseBdev3", 00:10:39.460 "uuid": "4a514a65-150f-4893-9428-d0dcb3f6bfec", 00:10:39.460 "is_configured": true, 00:10:39.460 "data_offset": 0, 00:10:39.460 "data_size": 65536 00:10:39.460 }, 00:10:39.460 { 00:10:39.460 "name": "BaseBdev4", 00:10:39.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.460 "is_configured": false, 
00:10:39.460 "data_offset": 0, 00:10:39.460 "data_size": 0 00:10:39.460 } 00:10:39.460 ] 00:10:39.460 }' 00:10:39.460 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.460 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.029 [2024-11-20 13:24:21.530367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:40.029 [2024-11-20 13:24:21.530481] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:40.029 [2024-11-20 13:24:21.530515] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:40.029 [2024-11-20 13:24:21.530840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:40.029 [2024-11-20 13:24:21.531051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:40.029 [2024-11-20 13:24:21.531099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:10:40.029 [2024-11-20 13:24:21.531361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.029 BaseBdev4 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.029 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.029 [ 00:10:40.029 { 00:10:40.029 "name": "BaseBdev4", 00:10:40.029 "aliases": [ 00:10:40.029 "f059f7e7-a7df-4713-9024-cbeeea15d003" 00:10:40.029 ], 00:10:40.029 "product_name": "Malloc disk", 00:10:40.029 "block_size": 512, 00:10:40.029 "num_blocks": 65536, 00:10:40.029 "uuid": "f059f7e7-a7df-4713-9024-cbeeea15d003", 00:10:40.029 "assigned_rate_limits": { 00:10:40.029 "rw_ios_per_sec": 0, 00:10:40.029 "rw_mbytes_per_sec": 0, 00:10:40.029 "r_mbytes_per_sec": 0, 00:10:40.029 "w_mbytes_per_sec": 0 00:10:40.029 }, 00:10:40.029 "claimed": true, 00:10:40.029 "claim_type": "exclusive_write", 00:10:40.029 "zoned": false, 00:10:40.029 "supported_io_types": { 00:10:40.029 "read": true, 00:10:40.029 "write": true, 00:10:40.029 "unmap": true, 00:10:40.029 "flush": true, 00:10:40.029 "reset": true, 00:10:40.029 
"nvme_admin": false, 00:10:40.029 "nvme_io": false, 00:10:40.029 "nvme_io_md": false, 00:10:40.029 "write_zeroes": true, 00:10:40.029 "zcopy": true, 00:10:40.029 "get_zone_info": false, 00:10:40.029 "zone_management": false, 00:10:40.029 "zone_append": false, 00:10:40.029 "compare": false, 00:10:40.029 "compare_and_write": false, 00:10:40.029 "abort": true, 00:10:40.029 "seek_hole": false, 00:10:40.029 "seek_data": false, 00:10:40.029 "copy": true, 00:10:40.029 "nvme_iov_md": false 00:10:40.029 }, 00:10:40.029 "memory_domains": [ 00:10:40.029 { 00:10:40.029 "dma_device_id": "system", 00:10:40.029 "dma_device_type": 1 00:10:40.029 }, 00:10:40.029 { 00:10:40.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.030 "dma_device_type": 2 00:10:40.030 } 00:10:40.030 ], 00:10:40.030 "driver_specific": {} 00:10:40.030 } 00:10:40.030 ] 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.030 13:24:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.030 "name": "Existed_Raid", 00:10:40.030 "uuid": "40c10e55-e87e-4097-8565-2d38e1928fa7", 00:10:40.030 "strip_size_kb": 0, 00:10:40.030 "state": "online", 00:10:40.030 "raid_level": "raid1", 00:10:40.030 "superblock": false, 00:10:40.030 "num_base_bdevs": 4, 00:10:40.030 "num_base_bdevs_discovered": 4, 00:10:40.030 "num_base_bdevs_operational": 4, 00:10:40.030 "base_bdevs_list": [ 00:10:40.030 { 00:10:40.030 "name": "BaseBdev1", 00:10:40.030 "uuid": "f349416e-7e2c-4d3d-bacb-22949918c275", 00:10:40.030 "is_configured": true, 00:10:40.030 "data_offset": 0, 00:10:40.030 "data_size": 65536 00:10:40.030 }, 00:10:40.030 { 00:10:40.030 "name": "BaseBdev2", 00:10:40.030 "uuid": "01a5dda0-d204-4591-b2d5-a04d325d36fe", 00:10:40.030 "is_configured": true, 00:10:40.030 "data_offset": 0, 00:10:40.030 "data_size": 65536 00:10:40.030 }, 00:10:40.030 { 00:10:40.030 "name": "BaseBdev3", 00:10:40.030 "uuid": 
"4a514a65-150f-4893-9428-d0dcb3f6bfec", 00:10:40.030 "is_configured": true, 00:10:40.030 "data_offset": 0, 00:10:40.030 "data_size": 65536 00:10:40.030 }, 00:10:40.030 { 00:10:40.030 "name": "BaseBdev4", 00:10:40.030 "uuid": "f059f7e7-a7df-4713-9024-cbeeea15d003", 00:10:40.030 "is_configured": true, 00:10:40.030 "data_offset": 0, 00:10:40.030 "data_size": 65536 00:10:40.030 } 00:10:40.030 ] 00:10:40.030 }' 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.030 13:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.599 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.599 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:40.599 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.599 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.599 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.599 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.599 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:40.599 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.599 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.599 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.599 [2024-11-20 13:24:22.029916] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.599 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.599 13:24:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.599 "name": "Existed_Raid", 00:10:40.599 "aliases": [ 00:10:40.599 "40c10e55-e87e-4097-8565-2d38e1928fa7" 00:10:40.599 ], 00:10:40.599 "product_name": "Raid Volume", 00:10:40.599 "block_size": 512, 00:10:40.599 "num_blocks": 65536, 00:10:40.599 "uuid": "40c10e55-e87e-4097-8565-2d38e1928fa7", 00:10:40.599 "assigned_rate_limits": { 00:10:40.599 "rw_ios_per_sec": 0, 00:10:40.599 "rw_mbytes_per_sec": 0, 00:10:40.599 "r_mbytes_per_sec": 0, 00:10:40.599 "w_mbytes_per_sec": 0 00:10:40.599 }, 00:10:40.599 "claimed": false, 00:10:40.599 "zoned": false, 00:10:40.599 "supported_io_types": { 00:10:40.599 "read": true, 00:10:40.599 "write": true, 00:10:40.599 "unmap": false, 00:10:40.599 "flush": false, 00:10:40.599 "reset": true, 00:10:40.599 "nvme_admin": false, 00:10:40.599 "nvme_io": false, 00:10:40.599 "nvme_io_md": false, 00:10:40.599 "write_zeroes": true, 00:10:40.599 "zcopy": false, 00:10:40.599 "get_zone_info": false, 00:10:40.599 "zone_management": false, 00:10:40.599 "zone_append": false, 00:10:40.599 "compare": false, 00:10:40.599 "compare_and_write": false, 00:10:40.599 "abort": false, 00:10:40.599 "seek_hole": false, 00:10:40.599 "seek_data": false, 00:10:40.599 "copy": false, 00:10:40.599 "nvme_iov_md": false 00:10:40.599 }, 00:10:40.599 "memory_domains": [ 00:10:40.599 { 00:10:40.599 "dma_device_id": "system", 00:10:40.599 "dma_device_type": 1 00:10:40.599 }, 00:10:40.599 { 00:10:40.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.599 "dma_device_type": 2 00:10:40.599 }, 00:10:40.599 { 00:10:40.599 "dma_device_id": "system", 00:10:40.599 "dma_device_type": 1 00:10:40.599 }, 00:10:40.599 { 00:10:40.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.599 "dma_device_type": 2 00:10:40.599 }, 00:10:40.599 { 00:10:40.599 "dma_device_id": "system", 00:10:40.599 "dma_device_type": 1 00:10:40.599 }, 00:10:40.599 { 00:10:40.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:40.599 "dma_device_type": 2 00:10:40.599 }, 00:10:40.599 { 00:10:40.599 "dma_device_id": "system", 00:10:40.599 "dma_device_type": 1 00:10:40.599 }, 00:10:40.600 { 00:10:40.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.600 "dma_device_type": 2 00:10:40.600 } 00:10:40.600 ], 00:10:40.600 "driver_specific": { 00:10:40.600 "raid": { 00:10:40.600 "uuid": "40c10e55-e87e-4097-8565-2d38e1928fa7", 00:10:40.600 "strip_size_kb": 0, 00:10:40.600 "state": "online", 00:10:40.600 "raid_level": "raid1", 00:10:40.600 "superblock": false, 00:10:40.600 "num_base_bdevs": 4, 00:10:40.600 "num_base_bdevs_discovered": 4, 00:10:40.600 "num_base_bdevs_operational": 4, 00:10:40.600 "base_bdevs_list": [ 00:10:40.600 { 00:10:40.600 "name": "BaseBdev1", 00:10:40.600 "uuid": "f349416e-7e2c-4d3d-bacb-22949918c275", 00:10:40.600 "is_configured": true, 00:10:40.600 "data_offset": 0, 00:10:40.600 "data_size": 65536 00:10:40.600 }, 00:10:40.600 { 00:10:40.600 "name": "BaseBdev2", 00:10:40.600 "uuid": "01a5dda0-d204-4591-b2d5-a04d325d36fe", 00:10:40.600 "is_configured": true, 00:10:40.600 "data_offset": 0, 00:10:40.600 "data_size": 65536 00:10:40.600 }, 00:10:40.600 { 00:10:40.600 "name": "BaseBdev3", 00:10:40.600 "uuid": "4a514a65-150f-4893-9428-d0dcb3f6bfec", 00:10:40.600 "is_configured": true, 00:10:40.600 "data_offset": 0, 00:10:40.600 "data_size": 65536 00:10:40.600 }, 00:10:40.600 { 00:10:40.600 "name": "BaseBdev4", 00:10:40.600 "uuid": "f059f7e7-a7df-4713-9024-cbeeea15d003", 00:10:40.600 "is_configured": true, 00:10:40.600 "data_offset": 0, 00:10:40.600 "data_size": 65536 00:10:40.600 } 00:10:40.600 ] 00:10:40.600 } 00:10:40.600 } 00:10:40.600 }' 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:40.600 BaseBdev2 00:10:40.600 BaseBdev3 
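The `base_bdev_names` variable built at bdev_raid.sh@188 comes from the jq filter `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name` applied to the dump above. The same extraction in self-contained form (the JSON is a trimmed stand-in reproducing only the fields the filter touches):

```python
import json

# Trimmed stand-in for the `bdev_get_bdevs -b Existed_Raid` dump above.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": true},
        {"name": "BaseBdev4", "is_configured": true}
      ]
    }
  }
}
""")

# jq: .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(base_bdev_names)
```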
00:10:40.600 BaseBdev4' 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.600 13:24:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.600 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.858 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.858 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.859 13:24:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.859 [2024-11-20 13:24:22.325110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.859 
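The repeated `[[ 512 == \5\1\2\ \ \ ]]` comparisons above work because jq's `join(" ")` renders null entries as empty strings: a 512-byte-block bdev with no metadata or DIF yields `'512   '` (three trailing spaces) for `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`. A sketch of that joining behavior:

```python
# Sample bdev fields; None stands in for JSON null, as for a bdev with
# no metadata size, interleave, or DIF type configured.
bdev = {"block_size": 512, "md_size": None, "md_interleave": None, "dif_type": None}

# jq's join(" ") treats null as "", so the result keeps the separators:
# "512" followed by three spaces.
fields = [bdev.get(k) for k in ("block_size", "md_size", "md_interleave", "dif_type")]
cmp_base_bdev = " ".join("" if v is None else str(v) for v in fields)
print(repr(cmp_base_bdev))
```

The test passes only when the raid bdev and every base bdev produce the identical joined string, trailing spaces included.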
13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.859 "name": "Existed_Raid", 00:10:40.859 "uuid": "40c10e55-e87e-4097-8565-2d38e1928fa7", 00:10:40.859 "strip_size_kb": 0, 00:10:40.859 "state": "online", 00:10:40.859 "raid_level": "raid1", 00:10:40.859 "superblock": false, 00:10:40.859 "num_base_bdevs": 4, 00:10:40.859 "num_base_bdevs_discovered": 3, 00:10:40.859 "num_base_bdevs_operational": 3, 00:10:40.859 "base_bdevs_list": [ 00:10:40.859 { 00:10:40.859 "name": null, 00:10:40.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.859 "is_configured": false, 00:10:40.859 "data_offset": 0, 00:10:40.859 "data_size": 65536 00:10:40.859 }, 00:10:40.859 { 00:10:40.859 "name": "BaseBdev2", 00:10:40.859 "uuid": "01a5dda0-d204-4591-b2d5-a04d325d36fe", 00:10:40.859 "is_configured": true, 00:10:40.859 "data_offset": 0, 00:10:40.859 "data_size": 65536 00:10:40.859 }, 00:10:40.859 { 00:10:40.859 "name": "BaseBdev3", 00:10:40.859 "uuid": "4a514a65-150f-4893-9428-d0dcb3f6bfec", 00:10:40.859 "is_configured": true, 00:10:40.859 "data_offset": 0, 
00:10:40.859 "data_size": 65536 00:10:40.859 }, 00:10:40.859 { 00:10:40.859 "name": "BaseBdev4", 00:10:40.859 "uuid": "f059f7e7-a7df-4713-9024-cbeeea15d003", 00:10:40.859 "is_configured": true, 00:10:40.859 "data_offset": 0, 00:10:40.859 "data_size": 65536 00:10:40.859 } 00:10:40.859 ] 00:10:40.859 }' 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.859 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.128 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:41.128 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.128 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.128 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.128 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.128 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.128 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.388 [2024-11-20 13:24:22.815459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test 
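After `bdev_malloc_delete BaseBdev1` the raid stays "online" with 3 of 4 base bdevs, because `has_redundancy raid1` returns 0 and `expected_state` is set to online at bdev_raid.sh@264. A rough sketch of that expectation logic (an illustration only, not SPDK's implementation; which levels count as redundant is an assumption here):

```python
def expected_state_after_removal(raid_level: str, discovered: int, total: int) -> str:
    """Sketch of the expected-state decision the test script makes: redundant
    levels (raid1 here; which others qualify is assumed) survive losing a
    base bdev, while non-redundant levels go offline on the first loss."""
    has_redundancy = raid_level == "raid1"  # assumption: minimal stand-in for has_redundancy()
    if discovered == total:
        return "online"
    return "online" if has_redundancy and discovered > 0 else "offline"

print(expected_state_after_removal("raid1", discovered=3, total=4))
```

The subsequent `verify_raid_bdev_state Existed_Raid online raid1 0 3` call in the trace checks exactly this: state online, 3 discovered and operational, with a null placeholder slot for the removed BaseBdev1.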
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.388 [2024-11-20 13:24:22.882636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.388 [2024-11-20 13:24:22.941883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:41.388 [2024-11-20 13:24:22.942033] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.388 [2024-11-20 13:24:22.953630] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.388 [2024-11-20 13:24:22.953724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.388 [2024-11-20 13:24:22.953765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.388 13:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.388 BaseBdev2 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 
-- # [[ -z '' ]] 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.388 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.388 [ 00:10:41.388 { 00:10:41.388 "name": "BaseBdev2", 00:10:41.388 "aliases": [ 00:10:41.388 "a23e4bfc-c81e-4e1c-9e88-2187643824b0" 00:10:41.388 ], 00:10:41.388 "product_name": "Malloc disk", 00:10:41.388 "block_size": 512, 00:10:41.388 "num_blocks": 65536, 00:10:41.388 "uuid": "a23e4bfc-c81e-4e1c-9e88-2187643824b0", 00:10:41.388 "assigned_rate_limits": { 00:10:41.388 "rw_ios_per_sec": 0, 00:10:41.388 "rw_mbytes_per_sec": 0, 00:10:41.388 "r_mbytes_per_sec": 0, 00:10:41.388 "w_mbytes_per_sec": 0 00:10:41.388 }, 00:10:41.388 "claimed": false, 00:10:41.388 "zoned": false, 00:10:41.388 "supported_io_types": { 00:10:41.388 "read": true, 00:10:41.388 "write": true, 00:10:41.388 "unmap": true, 00:10:41.388 "flush": true, 00:10:41.388 "reset": true, 00:10:41.388 "nvme_admin": false, 00:10:41.388 "nvme_io": false, 00:10:41.388 "nvme_io_md": false, 00:10:41.388 "write_zeroes": true, 00:10:41.388 "zcopy": true, 00:10:41.388 "get_zone_info": false, 00:10:41.388 "zone_management": false, 00:10:41.388 "zone_append": false, 00:10:41.388 "compare": false, 
00:10:41.388 "compare_and_write": false, 00:10:41.388 "abort": true, 00:10:41.388 "seek_hole": false, 00:10:41.388 "seek_data": false, 00:10:41.388 "copy": true, 00:10:41.389 "nvme_iov_md": false 00:10:41.389 }, 00:10:41.648 "memory_domains": [ 00:10:41.648 { 00:10:41.648 "dma_device_id": "system", 00:10:41.648 "dma_device_type": 1 00:10:41.648 }, 00:10:41.648 { 00:10:41.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.648 "dma_device_type": 2 00:10:41.648 } 00:10:41.648 ], 00:10:41.648 "driver_specific": {} 00:10:41.648 } 00:10:41.648 ] 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.648 BaseBdev3 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' 
]] 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.648 [ 00:10:41.648 { 00:10:41.648 "name": "BaseBdev3", 00:10:41.648 "aliases": [ 00:10:41.648 "43311b1d-82f7-4d72-a4b3-3e3cc37242f8" 00:10:41.648 ], 00:10:41.648 "product_name": "Malloc disk", 00:10:41.648 "block_size": 512, 00:10:41.648 "num_blocks": 65536, 00:10:41.648 "uuid": "43311b1d-82f7-4d72-a4b3-3e3cc37242f8", 00:10:41.648 "assigned_rate_limits": { 00:10:41.648 "rw_ios_per_sec": 0, 00:10:41.648 "rw_mbytes_per_sec": 0, 00:10:41.648 "r_mbytes_per_sec": 0, 00:10:41.648 "w_mbytes_per_sec": 0 00:10:41.648 }, 00:10:41.648 "claimed": false, 00:10:41.648 "zoned": false, 00:10:41.648 "supported_io_types": { 00:10:41.648 "read": true, 00:10:41.648 "write": true, 00:10:41.648 "unmap": true, 00:10:41.648 "flush": true, 00:10:41.648 "reset": true, 00:10:41.648 "nvme_admin": false, 00:10:41.648 "nvme_io": false, 00:10:41.648 "nvme_io_md": false, 00:10:41.648 "write_zeroes": true, 00:10:41.648 "zcopy": true, 00:10:41.648 "get_zone_info": false, 00:10:41.648 "zone_management": false, 00:10:41.648 "zone_append": false, 00:10:41.648 "compare": false, 00:10:41.648 
"compare_and_write": false, 00:10:41.648 "abort": true, 00:10:41.648 "seek_hole": false, 00:10:41.648 "seek_data": false, 00:10:41.648 "copy": true, 00:10:41.648 "nvme_iov_md": false 00:10:41.648 }, 00:10:41.648 "memory_domains": [ 00:10:41.648 { 00:10:41.648 "dma_device_id": "system", 00:10:41.648 "dma_device_type": 1 00:10:41.648 }, 00:10:41.648 { 00:10:41.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.648 "dma_device_type": 2 00:10:41.648 } 00:10:41.648 ], 00:10:41.648 "driver_specific": {} 00:10:41.648 } 00:10:41.648 ] 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.648 BaseBdev4 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.648 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.648 [ 00:10:41.648 { 00:10:41.648 "name": "BaseBdev4", 00:10:41.648 "aliases": [ 00:10:41.648 "867a6aa6-4f06-4e89-82e0-028a61557d6d" 00:10:41.648 ], 00:10:41.648 "product_name": "Malloc disk", 00:10:41.648 "block_size": 512, 00:10:41.648 "num_blocks": 65536, 00:10:41.648 "uuid": "867a6aa6-4f06-4e89-82e0-028a61557d6d", 00:10:41.648 "assigned_rate_limits": { 00:10:41.648 "rw_ios_per_sec": 0, 00:10:41.648 "rw_mbytes_per_sec": 0, 00:10:41.648 "r_mbytes_per_sec": 0, 00:10:41.648 "w_mbytes_per_sec": 0 00:10:41.648 }, 00:10:41.648 "claimed": false, 00:10:41.648 "zoned": false, 00:10:41.648 "supported_io_types": { 00:10:41.648 "read": true, 00:10:41.648 "write": true, 00:10:41.648 "unmap": true, 00:10:41.648 "flush": true, 00:10:41.648 "reset": true, 00:10:41.648 "nvme_admin": false, 00:10:41.648 "nvme_io": false, 00:10:41.648 "nvme_io_md": false, 00:10:41.648 "write_zeroes": true, 00:10:41.648 "zcopy": true, 00:10:41.649 "get_zone_info": false, 00:10:41.649 "zone_management": false, 00:10:41.649 "zone_append": false, 00:10:41.649 "compare": false, 00:10:41.649 
"compare_and_write": false, 00:10:41.649 "abort": true, 00:10:41.649 "seek_hole": false, 00:10:41.649 "seek_data": false, 00:10:41.649 "copy": true, 00:10:41.649 "nvme_iov_md": false 00:10:41.649 }, 00:10:41.649 "memory_domains": [ 00:10:41.649 { 00:10:41.649 "dma_device_id": "system", 00:10:41.649 "dma_device_type": 1 00:10:41.649 }, 00:10:41.649 { 00:10:41.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.649 "dma_device_type": 2 00:10:41.649 } 00:10:41.649 ], 00:10:41.649 "driver_specific": {} 00:10:41.649 } 00:10:41.649 ] 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.649 [2024-11-20 13:24:23.174559] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.649 [2024-11-20 13:24:23.174665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.649 [2024-11-20 13:24:23.174710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.649 [2024-11-20 13:24:23.176564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.649 [2024-11-20 13:24:23.176647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.649 "name": "Existed_Raid", 00:10:41.649 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:41.649 "strip_size_kb": 0, 00:10:41.649 "state": "configuring", 00:10:41.649 "raid_level": "raid1", 00:10:41.649 "superblock": false, 00:10:41.649 "num_base_bdevs": 4, 00:10:41.649 "num_base_bdevs_discovered": 3, 00:10:41.649 "num_base_bdevs_operational": 4, 00:10:41.649 "base_bdevs_list": [ 00:10:41.649 { 00:10:41.649 "name": "BaseBdev1", 00:10:41.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.649 "is_configured": false, 00:10:41.649 "data_offset": 0, 00:10:41.649 "data_size": 0 00:10:41.649 }, 00:10:41.649 { 00:10:41.649 "name": "BaseBdev2", 00:10:41.649 "uuid": "a23e4bfc-c81e-4e1c-9e88-2187643824b0", 00:10:41.649 "is_configured": true, 00:10:41.649 "data_offset": 0, 00:10:41.649 "data_size": 65536 00:10:41.649 }, 00:10:41.649 { 00:10:41.649 "name": "BaseBdev3", 00:10:41.649 "uuid": "43311b1d-82f7-4d72-a4b3-3e3cc37242f8", 00:10:41.649 "is_configured": true, 00:10:41.649 "data_offset": 0, 00:10:41.649 "data_size": 65536 00:10:41.649 }, 00:10:41.649 { 00:10:41.649 "name": "BaseBdev4", 00:10:41.649 "uuid": "867a6aa6-4f06-4e89-82e0-028a61557d6d", 00:10:41.649 "is_configured": true, 00:10:41.649 "data_offset": 0, 00:10:41.649 "data_size": 65536 00:10:41.649 } 00:10:41.649 ] 00:10:41.649 }' 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.649 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.217 [2024-11-20 13:24:23.601869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.217 "name": "Existed_Raid", 00:10:42.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.217 
"strip_size_kb": 0, 00:10:42.217 "state": "configuring", 00:10:42.217 "raid_level": "raid1", 00:10:42.217 "superblock": false, 00:10:42.217 "num_base_bdevs": 4, 00:10:42.217 "num_base_bdevs_discovered": 2, 00:10:42.217 "num_base_bdevs_operational": 4, 00:10:42.217 "base_bdevs_list": [ 00:10:42.217 { 00:10:42.217 "name": "BaseBdev1", 00:10:42.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.217 "is_configured": false, 00:10:42.217 "data_offset": 0, 00:10:42.217 "data_size": 0 00:10:42.217 }, 00:10:42.217 { 00:10:42.217 "name": null, 00:10:42.217 "uuid": "a23e4bfc-c81e-4e1c-9e88-2187643824b0", 00:10:42.217 "is_configured": false, 00:10:42.217 "data_offset": 0, 00:10:42.217 "data_size": 65536 00:10:42.217 }, 00:10:42.217 { 00:10:42.217 "name": "BaseBdev3", 00:10:42.217 "uuid": "43311b1d-82f7-4d72-a4b3-3e3cc37242f8", 00:10:42.217 "is_configured": true, 00:10:42.217 "data_offset": 0, 00:10:42.217 "data_size": 65536 00:10:42.217 }, 00:10:42.217 { 00:10:42.217 "name": "BaseBdev4", 00:10:42.217 "uuid": "867a6aa6-4f06-4e89-82e0-028a61557d6d", 00:10:42.217 "is_configured": true, 00:10:42.217 "data_offset": 0, 00:10:42.217 "data_size": 65536 00:10:42.217 } 00:10:42.217 ] 00:10:42.217 }' 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.217 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.476 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.476 13:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:42.477 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.477 13:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.477 13:24:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.477 [2024-11-20 13:24:24.048386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.477 BaseBdev1 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.477 [ 00:10:42.477 { 00:10:42.477 "name": "BaseBdev1", 00:10:42.477 "aliases": [ 00:10:42.477 "59576eab-dc9a-4311-910a-3511db73eafa" 00:10:42.477 ], 00:10:42.477 "product_name": "Malloc disk", 00:10:42.477 "block_size": 512, 00:10:42.477 "num_blocks": 65536, 00:10:42.477 "uuid": "59576eab-dc9a-4311-910a-3511db73eafa", 00:10:42.477 "assigned_rate_limits": { 00:10:42.477 "rw_ios_per_sec": 0, 00:10:42.477 "rw_mbytes_per_sec": 0, 00:10:42.477 "r_mbytes_per_sec": 0, 00:10:42.477 "w_mbytes_per_sec": 0 00:10:42.477 }, 00:10:42.477 "claimed": true, 00:10:42.477 "claim_type": "exclusive_write", 00:10:42.477 "zoned": false, 00:10:42.477 "supported_io_types": { 00:10:42.477 "read": true, 00:10:42.477 "write": true, 00:10:42.477 "unmap": true, 00:10:42.477 "flush": true, 00:10:42.477 "reset": true, 00:10:42.477 "nvme_admin": false, 00:10:42.477 "nvme_io": false, 00:10:42.477 "nvme_io_md": false, 00:10:42.477 "write_zeroes": true, 00:10:42.477 "zcopy": true, 00:10:42.477 "get_zone_info": false, 00:10:42.477 "zone_management": false, 00:10:42.477 "zone_append": false, 00:10:42.477 "compare": false, 00:10:42.477 "compare_and_write": false, 00:10:42.477 "abort": true, 00:10:42.477 "seek_hole": false, 00:10:42.477 "seek_data": false, 00:10:42.477 "copy": true, 00:10:42.477 "nvme_iov_md": false 00:10:42.477 }, 00:10:42.477 "memory_domains": [ 00:10:42.477 { 00:10:42.477 "dma_device_id": "system", 00:10:42.477 "dma_device_type": 1 00:10:42.477 }, 00:10:42.477 { 00:10:42.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.477 "dma_device_type": 2 00:10:42.477 } 00:10:42.477 ], 00:10:42.477 "driver_specific": {} 00:10:42.477 } 00:10:42.477 ] 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.477 "name": "Existed_Raid", 00:10:42.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.477 
"strip_size_kb": 0, 00:10:42.477 "state": "configuring", 00:10:42.477 "raid_level": "raid1", 00:10:42.477 "superblock": false, 00:10:42.477 "num_base_bdevs": 4, 00:10:42.477 "num_base_bdevs_discovered": 3, 00:10:42.477 "num_base_bdevs_operational": 4, 00:10:42.477 "base_bdevs_list": [ 00:10:42.477 { 00:10:42.477 "name": "BaseBdev1", 00:10:42.477 "uuid": "59576eab-dc9a-4311-910a-3511db73eafa", 00:10:42.477 "is_configured": true, 00:10:42.477 "data_offset": 0, 00:10:42.477 "data_size": 65536 00:10:42.477 }, 00:10:42.477 { 00:10:42.477 "name": null, 00:10:42.477 "uuid": "a23e4bfc-c81e-4e1c-9e88-2187643824b0", 00:10:42.477 "is_configured": false, 00:10:42.477 "data_offset": 0, 00:10:42.477 "data_size": 65536 00:10:42.477 }, 00:10:42.477 { 00:10:42.477 "name": "BaseBdev3", 00:10:42.477 "uuid": "43311b1d-82f7-4d72-a4b3-3e3cc37242f8", 00:10:42.477 "is_configured": true, 00:10:42.477 "data_offset": 0, 00:10:42.477 "data_size": 65536 00:10:42.477 }, 00:10:42.477 { 00:10:42.477 "name": "BaseBdev4", 00:10:42.477 "uuid": "867a6aa6-4f06-4e89-82e0-028a61557d6d", 00:10:42.477 "is_configured": true, 00:10:42.477 "data_offset": 0, 00:10:42.477 "data_size": 65536 00:10:42.477 } 00:10:42.477 ] 00:10:42.477 }' 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.477 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.045 
13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.045 [2024-11-20 13:24:24.599562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.045 "name": "Existed_Raid", 00:10:43.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.045 "strip_size_kb": 0, 00:10:43.045 "state": "configuring", 00:10:43.045 "raid_level": "raid1", 00:10:43.045 "superblock": false, 00:10:43.045 "num_base_bdevs": 4, 00:10:43.045 "num_base_bdevs_discovered": 2, 00:10:43.045 "num_base_bdevs_operational": 4, 00:10:43.045 "base_bdevs_list": [ 00:10:43.045 { 00:10:43.045 "name": "BaseBdev1", 00:10:43.045 "uuid": "59576eab-dc9a-4311-910a-3511db73eafa", 00:10:43.045 "is_configured": true, 00:10:43.045 "data_offset": 0, 00:10:43.045 "data_size": 65536 00:10:43.045 }, 00:10:43.045 { 00:10:43.045 "name": null, 00:10:43.045 "uuid": "a23e4bfc-c81e-4e1c-9e88-2187643824b0", 00:10:43.045 "is_configured": false, 00:10:43.045 "data_offset": 0, 00:10:43.045 "data_size": 65536 00:10:43.045 }, 00:10:43.045 { 00:10:43.045 "name": null, 00:10:43.045 "uuid": "43311b1d-82f7-4d72-a4b3-3e3cc37242f8", 00:10:43.045 "is_configured": false, 00:10:43.045 "data_offset": 0, 00:10:43.045 "data_size": 65536 00:10:43.045 }, 00:10:43.045 { 00:10:43.045 "name": "BaseBdev4", 00:10:43.045 "uuid": "867a6aa6-4f06-4e89-82e0-028a61557d6d", 00:10:43.045 "is_configured": true, 00:10:43.045 "data_offset": 0, 00:10:43.045 "data_size": 65536 00:10:43.045 } 00:10:43.045 ] 00:10:43.045 }' 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.045 13:24:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.613 [2024-11-20 13:24:25.110663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.613 "name": "Existed_Raid", 00:10:43.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.613 "strip_size_kb": 0, 00:10:43.613 "state": "configuring", 00:10:43.613 "raid_level": "raid1", 00:10:43.613 "superblock": false, 00:10:43.613 "num_base_bdevs": 4, 00:10:43.613 "num_base_bdevs_discovered": 3, 00:10:43.613 "num_base_bdevs_operational": 4, 00:10:43.613 "base_bdevs_list": [ 00:10:43.613 { 00:10:43.613 "name": "BaseBdev1", 00:10:43.613 "uuid": "59576eab-dc9a-4311-910a-3511db73eafa", 00:10:43.613 "is_configured": true, 00:10:43.613 "data_offset": 0, 00:10:43.613 "data_size": 65536 00:10:43.613 }, 00:10:43.613 { 00:10:43.613 "name": null, 00:10:43.613 "uuid": "a23e4bfc-c81e-4e1c-9e88-2187643824b0", 00:10:43.613 "is_configured": false, 00:10:43.613 "data_offset": 0, 00:10:43.613 "data_size": 65536 00:10:43.613 }, 00:10:43.613 { 
00:10:43.613 "name": "BaseBdev3", 00:10:43.613 "uuid": "43311b1d-82f7-4d72-a4b3-3e3cc37242f8", 00:10:43.613 "is_configured": true, 00:10:43.613 "data_offset": 0, 00:10:43.613 "data_size": 65536 00:10:43.613 }, 00:10:43.613 { 00:10:43.613 "name": "BaseBdev4", 00:10:43.613 "uuid": "867a6aa6-4f06-4e89-82e0-028a61557d6d", 00:10:43.613 "is_configured": true, 00:10:43.613 "data_offset": 0, 00:10:43.613 "data_size": 65536 00:10:43.613 } 00:10:43.613 ] 00:10:43.613 }' 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.613 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.872 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.872 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.872 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.872 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.131 [2024-11-20 13:24:25.585921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.131 "name": "Existed_Raid", 00:10:44.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.131 "strip_size_kb": 0, 00:10:44.131 "state": "configuring", 00:10:44.131 "raid_level": "raid1", 00:10:44.131 "superblock": false, 00:10:44.131 
"num_base_bdevs": 4, 00:10:44.131 "num_base_bdevs_discovered": 2, 00:10:44.131 "num_base_bdevs_operational": 4, 00:10:44.131 "base_bdevs_list": [ 00:10:44.131 { 00:10:44.131 "name": null, 00:10:44.131 "uuid": "59576eab-dc9a-4311-910a-3511db73eafa", 00:10:44.131 "is_configured": false, 00:10:44.131 "data_offset": 0, 00:10:44.131 "data_size": 65536 00:10:44.131 }, 00:10:44.131 { 00:10:44.131 "name": null, 00:10:44.131 "uuid": "a23e4bfc-c81e-4e1c-9e88-2187643824b0", 00:10:44.131 "is_configured": false, 00:10:44.131 "data_offset": 0, 00:10:44.131 "data_size": 65536 00:10:44.131 }, 00:10:44.131 { 00:10:44.131 "name": "BaseBdev3", 00:10:44.131 "uuid": "43311b1d-82f7-4d72-a4b3-3e3cc37242f8", 00:10:44.131 "is_configured": true, 00:10:44.131 "data_offset": 0, 00:10:44.131 "data_size": 65536 00:10:44.131 }, 00:10:44.131 { 00:10:44.131 "name": "BaseBdev4", 00:10:44.131 "uuid": "867a6aa6-4f06-4e89-82e0-028a61557d6d", 00:10:44.131 "is_configured": true, 00:10:44.131 "data_offset": 0, 00:10:44.131 "data_size": 65536 00:10:44.131 } 00:10:44.131 ] 00:10:44.131 }' 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.131 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.389 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.389 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.389 13:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.390 13:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:44.390 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.390 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:44.390 13:24:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:44.390 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.390 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.390 [2024-11-20 13:24:26.051784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.648 13:24:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.648 "name": "Existed_Raid", 00:10:44.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.648 "strip_size_kb": 0, 00:10:44.648 "state": "configuring", 00:10:44.648 "raid_level": "raid1", 00:10:44.648 "superblock": false, 00:10:44.648 "num_base_bdevs": 4, 00:10:44.648 "num_base_bdevs_discovered": 3, 00:10:44.648 "num_base_bdevs_operational": 4, 00:10:44.648 "base_bdevs_list": [ 00:10:44.648 { 00:10:44.648 "name": null, 00:10:44.648 "uuid": "59576eab-dc9a-4311-910a-3511db73eafa", 00:10:44.648 "is_configured": false, 00:10:44.648 "data_offset": 0, 00:10:44.648 "data_size": 65536 00:10:44.648 }, 00:10:44.648 { 00:10:44.648 "name": "BaseBdev2", 00:10:44.648 "uuid": "a23e4bfc-c81e-4e1c-9e88-2187643824b0", 00:10:44.648 "is_configured": true, 00:10:44.648 "data_offset": 0, 00:10:44.648 "data_size": 65536 00:10:44.648 }, 00:10:44.648 { 00:10:44.648 "name": "BaseBdev3", 00:10:44.648 "uuid": "43311b1d-82f7-4d72-a4b3-3e3cc37242f8", 00:10:44.648 "is_configured": true, 00:10:44.648 "data_offset": 0, 00:10:44.648 "data_size": 65536 00:10:44.648 }, 00:10:44.648 { 00:10:44.648 "name": "BaseBdev4", 00:10:44.648 "uuid": "867a6aa6-4f06-4e89-82e0-028a61557d6d", 00:10:44.648 "is_configured": true, 00:10:44.648 "data_offset": 0, 00:10:44.648 "data_size": 65536 00:10:44.648 } 00:10:44.648 ] 00:10:44.648 }' 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.648 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 59576eab-dc9a-4311-910a-3511db73eafa 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.907 NewBaseBdev 00:10:44.907 [2024-11-20 13:24:26.538324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:44.907 [2024-11-20 13:24:26.538368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:44.907 [2024-11-20 13:24:26.538377] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:44.907 [2024-11-20 
13:24:26.538605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:10:44.907 [2024-11-20 13:24:26.538732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:44.907 [2024-11-20 13:24:26.538741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:44.907 [2024-11-20 13:24:26.538936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:44.907 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.907 13:24:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.907 [ 00:10:44.907 { 00:10:44.907 "name": "NewBaseBdev", 00:10:44.907 "aliases": [ 00:10:44.907 "59576eab-dc9a-4311-910a-3511db73eafa" 00:10:44.907 ], 00:10:44.907 "product_name": "Malloc disk", 00:10:44.907 "block_size": 512, 00:10:44.907 "num_blocks": 65536, 00:10:44.908 "uuid": "59576eab-dc9a-4311-910a-3511db73eafa", 00:10:44.908 "assigned_rate_limits": { 00:10:44.908 "rw_ios_per_sec": 0, 00:10:44.908 "rw_mbytes_per_sec": 0, 00:10:44.908 "r_mbytes_per_sec": 0, 00:10:44.908 "w_mbytes_per_sec": 0 00:10:44.908 }, 00:10:44.908 "claimed": true, 00:10:44.908 "claim_type": "exclusive_write", 00:10:44.908 "zoned": false, 00:10:44.908 "supported_io_types": { 00:10:44.908 "read": true, 00:10:44.908 "write": true, 00:10:44.908 "unmap": true, 00:10:44.908 "flush": true, 00:10:44.908 "reset": true, 00:10:44.908 "nvme_admin": false, 00:10:44.908 "nvme_io": false, 00:10:44.908 "nvme_io_md": false, 00:10:44.908 "write_zeroes": true, 00:10:44.908 "zcopy": true, 00:10:44.908 "get_zone_info": false, 00:10:44.908 "zone_management": false, 00:10:44.908 "zone_append": false, 00:10:44.908 "compare": false, 00:10:44.908 "compare_and_write": false, 00:10:44.908 "abort": true, 00:10:44.908 "seek_hole": false, 00:10:44.908 "seek_data": false, 00:10:44.908 "copy": true, 00:10:44.908 "nvme_iov_md": false 00:10:44.908 }, 00:10:44.908 "memory_domains": [ 00:10:44.908 { 00:10:44.908 "dma_device_id": "system", 00:10:44.908 "dma_device_type": 1 00:10:44.908 }, 00:10:44.908 { 00:10:44.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.908 "dma_device_type": 2 00:10:44.908 } 00:10:44.908 ], 00:10:44.908 "driver_specific": {} 00:10:44.908 } 00:10:44.908 ] 00:10:44.908 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.908 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.908 13:24:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:44.908 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.908 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.908 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.908 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.908 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.908 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.908 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.908 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.908 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.908 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.908 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.908 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.908 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.167 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.167 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.167 "name": "Existed_Raid", 00:10:45.167 "uuid": "76528f5b-cee9-4986-a7f0-47dc979cb1cb", 00:10:45.167 "strip_size_kb": 0, 00:10:45.167 "state": "online", 00:10:45.167 "raid_level": "raid1", 
00:10:45.167 "superblock": false, 00:10:45.167 "num_base_bdevs": 4, 00:10:45.167 "num_base_bdevs_discovered": 4, 00:10:45.167 "num_base_bdevs_operational": 4, 00:10:45.167 "base_bdevs_list": [ 00:10:45.167 { 00:10:45.167 "name": "NewBaseBdev", 00:10:45.167 "uuid": "59576eab-dc9a-4311-910a-3511db73eafa", 00:10:45.167 "is_configured": true, 00:10:45.167 "data_offset": 0, 00:10:45.167 "data_size": 65536 00:10:45.167 }, 00:10:45.167 { 00:10:45.167 "name": "BaseBdev2", 00:10:45.167 "uuid": "a23e4bfc-c81e-4e1c-9e88-2187643824b0", 00:10:45.167 "is_configured": true, 00:10:45.167 "data_offset": 0, 00:10:45.167 "data_size": 65536 00:10:45.167 }, 00:10:45.167 { 00:10:45.167 "name": "BaseBdev3", 00:10:45.167 "uuid": "43311b1d-82f7-4d72-a4b3-3e3cc37242f8", 00:10:45.167 "is_configured": true, 00:10:45.167 "data_offset": 0, 00:10:45.167 "data_size": 65536 00:10:45.167 }, 00:10:45.167 { 00:10:45.167 "name": "BaseBdev4", 00:10:45.167 "uuid": "867a6aa6-4f06-4e89-82e0-028a61557d6d", 00:10:45.167 "is_configured": true, 00:10:45.167 "data_offset": 0, 00:10:45.167 "data_size": 65536 00:10:45.167 } 00:10:45.167 ] 00:10:45.167 }' 00:10:45.167 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.167 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.460 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:45.460 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:45.460 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:45.460 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.460 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.460 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:10:45.460 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:45.460 13:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.460 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.460 13:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.460 [2024-11-20 13:24:26.985956] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.460 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.460 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.460 "name": "Existed_Raid", 00:10:45.460 "aliases": [ 00:10:45.460 "76528f5b-cee9-4986-a7f0-47dc979cb1cb" 00:10:45.460 ], 00:10:45.460 "product_name": "Raid Volume", 00:10:45.460 "block_size": 512, 00:10:45.460 "num_blocks": 65536, 00:10:45.460 "uuid": "76528f5b-cee9-4986-a7f0-47dc979cb1cb", 00:10:45.460 "assigned_rate_limits": { 00:10:45.460 "rw_ios_per_sec": 0, 00:10:45.460 "rw_mbytes_per_sec": 0, 00:10:45.460 "r_mbytes_per_sec": 0, 00:10:45.460 "w_mbytes_per_sec": 0 00:10:45.460 }, 00:10:45.460 "claimed": false, 00:10:45.460 "zoned": false, 00:10:45.460 "supported_io_types": { 00:10:45.460 "read": true, 00:10:45.460 "write": true, 00:10:45.460 "unmap": false, 00:10:45.460 "flush": false, 00:10:45.460 "reset": true, 00:10:45.460 "nvme_admin": false, 00:10:45.460 "nvme_io": false, 00:10:45.460 "nvme_io_md": false, 00:10:45.460 "write_zeroes": true, 00:10:45.460 "zcopy": false, 00:10:45.460 "get_zone_info": false, 00:10:45.460 "zone_management": false, 00:10:45.460 "zone_append": false, 00:10:45.460 "compare": false, 00:10:45.460 "compare_and_write": false, 00:10:45.460 "abort": false, 00:10:45.460 "seek_hole": false, 00:10:45.460 "seek_data": false, 00:10:45.460 "copy": false, 00:10:45.460 
"nvme_iov_md": false 00:10:45.460 }, 00:10:45.460 "memory_domains": [ 00:10:45.460 { 00:10:45.460 "dma_device_id": "system", 00:10:45.460 "dma_device_type": 1 00:10:45.460 }, 00:10:45.460 { 00:10:45.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.460 "dma_device_type": 2 00:10:45.460 }, 00:10:45.460 { 00:10:45.460 "dma_device_id": "system", 00:10:45.460 "dma_device_type": 1 00:10:45.460 }, 00:10:45.460 { 00:10:45.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.460 "dma_device_type": 2 00:10:45.460 }, 00:10:45.460 { 00:10:45.460 "dma_device_id": "system", 00:10:45.460 "dma_device_type": 1 00:10:45.460 }, 00:10:45.460 { 00:10:45.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.460 "dma_device_type": 2 00:10:45.460 }, 00:10:45.460 { 00:10:45.460 "dma_device_id": "system", 00:10:45.460 "dma_device_type": 1 00:10:45.460 }, 00:10:45.460 { 00:10:45.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.460 "dma_device_type": 2 00:10:45.460 } 00:10:45.460 ], 00:10:45.460 "driver_specific": { 00:10:45.460 "raid": { 00:10:45.460 "uuid": "76528f5b-cee9-4986-a7f0-47dc979cb1cb", 00:10:45.460 "strip_size_kb": 0, 00:10:45.460 "state": "online", 00:10:45.460 "raid_level": "raid1", 00:10:45.460 "superblock": false, 00:10:45.460 "num_base_bdevs": 4, 00:10:45.460 "num_base_bdevs_discovered": 4, 00:10:45.460 "num_base_bdevs_operational": 4, 00:10:45.460 "base_bdevs_list": [ 00:10:45.460 { 00:10:45.460 "name": "NewBaseBdev", 00:10:45.460 "uuid": "59576eab-dc9a-4311-910a-3511db73eafa", 00:10:45.460 "is_configured": true, 00:10:45.460 "data_offset": 0, 00:10:45.460 "data_size": 65536 00:10:45.460 }, 00:10:45.460 { 00:10:45.460 "name": "BaseBdev2", 00:10:45.460 "uuid": "a23e4bfc-c81e-4e1c-9e88-2187643824b0", 00:10:45.460 "is_configured": true, 00:10:45.460 "data_offset": 0, 00:10:45.460 "data_size": 65536 00:10:45.460 }, 00:10:45.460 { 00:10:45.461 "name": "BaseBdev3", 00:10:45.461 "uuid": "43311b1d-82f7-4d72-a4b3-3e3cc37242f8", 00:10:45.461 "is_configured": true, 
00:10:45.461 "data_offset": 0, 00:10:45.461 "data_size": 65536 00:10:45.461 }, 00:10:45.461 { 00:10:45.461 "name": "BaseBdev4", 00:10:45.461 "uuid": "867a6aa6-4f06-4e89-82e0-028a61557d6d", 00:10:45.461 "is_configured": true, 00:10:45.461 "data_offset": 0, 00:10:45.461 "data_size": 65536 00:10:45.461 } 00:10:45.461 ] 00:10:45.461 } 00:10:45.461 } 00:10:45.461 }' 00:10:45.461 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.461 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:45.461 BaseBdev2 00:10:45.461 BaseBdev3 00:10:45.461 BaseBdev4' 00:10:45.461 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.461 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.461 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.461 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:45.461 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.461 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.461 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.719 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.720 [2024-11-20 13:24:27.305104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.720 [2024-11-20 13:24:27.305135] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.720 [2024-11-20 13:24:27.305225] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.720 [2024-11-20 13:24:27.305523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.720 [2024-11-20 13:24:27.305541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83667 
00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83667 ']' 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83667 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83667 00:10:45.720 killing process with pid 83667 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83667' 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 83667 00:10:45.720 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 83667 00:10:45.720 [2024-11-20 13:24:27.337059] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:45.720 [2024-11-20 13:24:27.378835] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:45.978 13:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:45.978 00:10:45.978 real 0m9.451s 00:10:45.978 user 0m16.261s 00:10:45.978 sys 0m1.850s 00:10:45.978 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.978 ************************************ 00:10:45.978 END TEST raid_state_function_test 00:10:45.978 ************************************ 00:10:45.978 13:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.978 13:24:27 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:45.978 13:24:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:45.978 13:24:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.978 13:24:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:46.237 ************************************ 00:10:46.237 START TEST raid_state_function_test_sb 00:10:46.237 ************************************ 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.237 13:24:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:46.237 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:46.238 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:46.238 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:46.238 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84321 00:10:46.238 Process raid pid: 84321 00:10:46.238 13:24:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84321' 00:10:46.238 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84321 00:10:46.238 13:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:46.238 13:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84321 ']' 00:10:46.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.238 13:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.238 13:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.238 13:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.238 13:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.238 13:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.238 [2024-11-20 13:24:27.749693] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:46.238 [2024-11-20 13:24:27.749842] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.497 [2024-11-20 13:24:27.911892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.497 [2024-11-20 13:24:27.939604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.497 [2024-11-20 13:24:27.984047] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.497 [2024-11-20 13:24:27.984090] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.065 [2024-11-20 13:24:28.574750] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:47.065 [2024-11-20 13:24:28.574865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:47.065 [2024-11-20 13:24:28.574875] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:47.065 [2024-11-20 13:24:28.574885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:47.065 [2024-11-20 13:24:28.574891] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:47.065 [2024-11-20 13:24:28.574901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:47.065 [2024-11-20 13:24:28.574907] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:47.065 [2024-11-20 13:24:28.574915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.065 13:24:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.065 "name": "Existed_Raid", 00:10:47.065 "uuid": "227d1308-5874-463e-9883-d8641ee4d657", 00:10:47.065 "strip_size_kb": 0, 00:10:47.065 "state": "configuring", 00:10:47.065 "raid_level": "raid1", 00:10:47.065 "superblock": true, 00:10:47.065 "num_base_bdevs": 4, 00:10:47.065 "num_base_bdevs_discovered": 0, 00:10:47.065 "num_base_bdevs_operational": 4, 00:10:47.065 "base_bdevs_list": [ 00:10:47.065 { 00:10:47.065 "name": "BaseBdev1", 00:10:47.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.065 "is_configured": false, 00:10:47.065 "data_offset": 0, 00:10:47.065 "data_size": 0 00:10:47.065 }, 00:10:47.065 { 00:10:47.065 "name": "BaseBdev2", 00:10:47.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.065 "is_configured": false, 00:10:47.065 "data_offset": 0, 00:10:47.065 "data_size": 0 00:10:47.065 }, 00:10:47.065 { 00:10:47.065 "name": "BaseBdev3", 00:10:47.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.065 "is_configured": false, 00:10:47.065 "data_offset": 0, 00:10:47.065 "data_size": 0 00:10:47.065 }, 00:10:47.065 { 00:10:47.065 "name": "BaseBdev4", 00:10:47.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.065 "is_configured": false, 00:10:47.065 "data_offset": 0, 00:10:47.065 "data_size": 0 00:10:47.065 } 00:10:47.065 ] 00:10:47.065 }' 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.065 13:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.635 [2024-11-20 13:24:29.021931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:47.635 [2024-11-20 13:24:29.021971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.635 [2024-11-20 13:24:29.029926] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:47.635 [2024-11-20 13:24:29.029968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:47.635 [2024-11-20 13:24:29.029977] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:47.635 [2024-11-20 13:24:29.029985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:47.635 [2024-11-20 13:24:29.030005] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:47.635 [2024-11-20 13:24:29.030014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:47.635 [2024-11-20 13:24:29.030021] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:47.635 [2024-11-20 13:24:29.030029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.635 [2024-11-20 13:24:29.046834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.635 BaseBdev1 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.635 [ 00:10:47.635 { 00:10:47.635 "name": "BaseBdev1", 00:10:47.635 "aliases": [ 00:10:47.635 "5db3b909-80d2-40d5-82a9-ae1014f61874" 00:10:47.635 ], 00:10:47.635 "product_name": "Malloc disk", 00:10:47.635 "block_size": 512, 00:10:47.635 "num_blocks": 65536, 00:10:47.635 "uuid": "5db3b909-80d2-40d5-82a9-ae1014f61874", 00:10:47.635 "assigned_rate_limits": { 00:10:47.635 "rw_ios_per_sec": 0, 00:10:47.635 "rw_mbytes_per_sec": 0, 00:10:47.635 "r_mbytes_per_sec": 0, 00:10:47.635 "w_mbytes_per_sec": 0 00:10:47.635 }, 00:10:47.635 "claimed": true, 00:10:47.635 "claim_type": "exclusive_write", 00:10:47.635 "zoned": false, 00:10:47.635 "supported_io_types": { 00:10:47.635 "read": true, 00:10:47.635 "write": true, 00:10:47.635 "unmap": true, 00:10:47.635 "flush": true, 00:10:47.635 "reset": true, 00:10:47.635 "nvme_admin": false, 00:10:47.635 "nvme_io": false, 00:10:47.635 "nvme_io_md": false, 00:10:47.635 "write_zeroes": true, 00:10:47.635 "zcopy": true, 00:10:47.635 "get_zone_info": false, 00:10:47.635 "zone_management": false, 00:10:47.635 "zone_append": false, 00:10:47.635 "compare": false, 00:10:47.635 "compare_and_write": false, 00:10:47.635 "abort": true, 00:10:47.635 "seek_hole": false, 00:10:47.635 "seek_data": false, 00:10:47.635 "copy": true, 00:10:47.635 "nvme_iov_md": false 00:10:47.635 }, 00:10:47.635 "memory_domains": [ 00:10:47.635 { 00:10:47.635 "dma_device_id": "system", 00:10:47.635 "dma_device_type": 1 00:10:47.635 }, 00:10:47.635 { 00:10:47.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.635 "dma_device_type": 2 00:10:47.635 } 00:10:47.635 ], 00:10:47.635 "driver_specific": {} 
00:10:47.635 } 00:10:47.635 ] 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.635 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.636 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.636 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.636 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.636 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.636 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.636 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.636 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.636 13:24:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.636 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.636 "name": "Existed_Raid", 00:10:47.636 "uuid": "76f70557-8d16-4adf-9956-bd9113ba7a17", 00:10:47.636 "strip_size_kb": 0, 00:10:47.636 "state": "configuring", 00:10:47.636 "raid_level": "raid1", 00:10:47.636 "superblock": true, 00:10:47.636 "num_base_bdevs": 4, 00:10:47.636 "num_base_bdevs_discovered": 1, 00:10:47.636 "num_base_bdevs_operational": 4, 00:10:47.636 "base_bdevs_list": [ 00:10:47.636 { 00:10:47.636 "name": "BaseBdev1", 00:10:47.636 "uuid": "5db3b909-80d2-40d5-82a9-ae1014f61874", 00:10:47.636 "is_configured": true, 00:10:47.636 "data_offset": 2048, 00:10:47.636 "data_size": 63488 00:10:47.636 }, 00:10:47.636 { 00:10:47.636 "name": "BaseBdev2", 00:10:47.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.636 "is_configured": false, 00:10:47.636 "data_offset": 0, 00:10:47.636 "data_size": 0 00:10:47.636 }, 00:10:47.636 { 00:10:47.636 "name": "BaseBdev3", 00:10:47.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.636 "is_configured": false, 00:10:47.636 "data_offset": 0, 00:10:47.636 "data_size": 0 00:10:47.636 }, 00:10:47.636 { 00:10:47.636 "name": "BaseBdev4", 00:10:47.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.636 "is_configured": false, 00:10:47.636 "data_offset": 0, 00:10:47.636 "data_size": 0 00:10:47.636 } 00:10:47.636 ] 00:10:47.636 }' 00:10:47.636 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.636 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:47.921 [2024-11-20 13:24:29.514091] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:47.921 [2024-11-20 13:24:29.514206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.921 [2024-11-20 13:24:29.522107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.921 [2024-11-20 13:24:29.524045] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:47.921 [2024-11-20 13:24:29.524134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:47.921 [2024-11-20 13:24:29.524168] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:47.921 [2024-11-20 13:24:29.524192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:47.921 [2024-11-20 13:24:29.524214] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:47.921 [2024-11-20 13:24:29.524236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:47.921 13:24:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.921 "name": 
"Existed_Raid", 00:10:47.921 "uuid": "853a42e4-79be-499b-adcd-b0f6f95fc843", 00:10:47.921 "strip_size_kb": 0, 00:10:47.921 "state": "configuring", 00:10:47.921 "raid_level": "raid1", 00:10:47.921 "superblock": true, 00:10:47.921 "num_base_bdevs": 4, 00:10:47.921 "num_base_bdevs_discovered": 1, 00:10:47.921 "num_base_bdevs_operational": 4, 00:10:47.921 "base_bdevs_list": [ 00:10:47.921 { 00:10:47.921 "name": "BaseBdev1", 00:10:47.921 "uuid": "5db3b909-80d2-40d5-82a9-ae1014f61874", 00:10:47.921 "is_configured": true, 00:10:47.921 "data_offset": 2048, 00:10:47.921 "data_size": 63488 00:10:47.921 }, 00:10:47.921 { 00:10:47.921 "name": "BaseBdev2", 00:10:47.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.921 "is_configured": false, 00:10:47.921 "data_offset": 0, 00:10:47.921 "data_size": 0 00:10:47.921 }, 00:10:47.921 { 00:10:47.921 "name": "BaseBdev3", 00:10:47.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.921 "is_configured": false, 00:10:47.921 "data_offset": 0, 00:10:47.921 "data_size": 0 00:10:47.921 }, 00:10:47.921 { 00:10:47.921 "name": "BaseBdev4", 00:10:47.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.921 "is_configured": false, 00:10:47.921 "data_offset": 0, 00:10:47.921 "data_size": 0 00:10:47.921 } 00:10:47.921 ] 00:10:47.921 }' 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.921 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.489 [2024-11-20 13:24:29.976425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.489 
BaseBdev2 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.489 [ 00:10:48.489 { 00:10:48.489 "name": "BaseBdev2", 00:10:48.489 "aliases": [ 00:10:48.489 "d9ecafb9-64cf-4399-9c8d-f780a2629554" 00:10:48.489 ], 00:10:48.489 "product_name": "Malloc disk", 00:10:48.489 "block_size": 512, 00:10:48.489 "num_blocks": 65536, 00:10:48.489 "uuid": "d9ecafb9-64cf-4399-9c8d-f780a2629554", 00:10:48.489 "assigned_rate_limits": { 
00:10:48.489 "rw_ios_per_sec": 0, 00:10:48.489 "rw_mbytes_per_sec": 0, 00:10:48.489 "r_mbytes_per_sec": 0, 00:10:48.489 "w_mbytes_per_sec": 0 00:10:48.489 }, 00:10:48.489 "claimed": true, 00:10:48.489 "claim_type": "exclusive_write", 00:10:48.489 "zoned": false, 00:10:48.489 "supported_io_types": { 00:10:48.489 "read": true, 00:10:48.489 "write": true, 00:10:48.489 "unmap": true, 00:10:48.489 "flush": true, 00:10:48.489 "reset": true, 00:10:48.489 "nvme_admin": false, 00:10:48.489 "nvme_io": false, 00:10:48.489 "nvme_io_md": false, 00:10:48.489 "write_zeroes": true, 00:10:48.489 "zcopy": true, 00:10:48.489 "get_zone_info": false, 00:10:48.489 "zone_management": false, 00:10:48.489 "zone_append": false, 00:10:48.489 "compare": false, 00:10:48.489 "compare_and_write": false, 00:10:48.489 "abort": true, 00:10:48.489 "seek_hole": false, 00:10:48.489 "seek_data": false, 00:10:48.489 "copy": true, 00:10:48.489 "nvme_iov_md": false 00:10:48.489 }, 00:10:48.489 "memory_domains": [ 00:10:48.489 { 00:10:48.489 "dma_device_id": "system", 00:10:48.489 "dma_device_type": 1 00:10:48.489 }, 00:10:48.489 { 00:10:48.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.489 "dma_device_type": 2 00:10:48.489 } 00:10:48.489 ], 00:10:48.489 "driver_specific": {} 00:10:48.489 } 00:10:48.489 ] 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.489 13:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.489 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.489 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.489 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.489 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.489 "name": "Existed_Raid", 00:10:48.489 "uuid": "853a42e4-79be-499b-adcd-b0f6f95fc843", 00:10:48.489 "strip_size_kb": 0, 00:10:48.489 "state": "configuring", 00:10:48.489 "raid_level": "raid1", 00:10:48.489 "superblock": true, 00:10:48.489 "num_base_bdevs": 4, 00:10:48.489 "num_base_bdevs_discovered": 2, 00:10:48.489 "num_base_bdevs_operational": 4, 00:10:48.489 
"base_bdevs_list": [ 00:10:48.489 { 00:10:48.489 "name": "BaseBdev1", 00:10:48.489 "uuid": "5db3b909-80d2-40d5-82a9-ae1014f61874", 00:10:48.489 "is_configured": true, 00:10:48.489 "data_offset": 2048, 00:10:48.489 "data_size": 63488 00:10:48.489 }, 00:10:48.489 { 00:10:48.489 "name": "BaseBdev2", 00:10:48.489 "uuid": "d9ecafb9-64cf-4399-9c8d-f780a2629554", 00:10:48.489 "is_configured": true, 00:10:48.489 "data_offset": 2048, 00:10:48.489 "data_size": 63488 00:10:48.489 }, 00:10:48.489 { 00:10:48.489 "name": "BaseBdev3", 00:10:48.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.489 "is_configured": false, 00:10:48.489 "data_offset": 0, 00:10:48.489 "data_size": 0 00:10:48.489 }, 00:10:48.489 { 00:10:48.489 "name": "BaseBdev4", 00:10:48.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.489 "is_configured": false, 00:10:48.489 "data_offset": 0, 00:10:48.490 "data_size": 0 00:10:48.490 } 00:10:48.490 ] 00:10:48.490 }' 00:10:48.490 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.490 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.057 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.058 [2024-11-20 13:24:30.456382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.058 BaseBdev3 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.058 [ 00:10:49.058 { 00:10:49.058 "name": "BaseBdev3", 00:10:49.058 "aliases": [ 00:10:49.058 "890518cc-dda3-4ab6-8f11-642f9fa8aa37" 00:10:49.058 ], 00:10:49.058 "product_name": "Malloc disk", 00:10:49.058 "block_size": 512, 00:10:49.058 "num_blocks": 65536, 00:10:49.058 "uuid": "890518cc-dda3-4ab6-8f11-642f9fa8aa37", 00:10:49.058 "assigned_rate_limits": { 00:10:49.058 "rw_ios_per_sec": 0, 00:10:49.058 "rw_mbytes_per_sec": 0, 00:10:49.058 "r_mbytes_per_sec": 0, 00:10:49.058 "w_mbytes_per_sec": 0 00:10:49.058 }, 00:10:49.058 "claimed": true, 00:10:49.058 "claim_type": "exclusive_write", 00:10:49.058 "zoned": false, 00:10:49.058 "supported_io_types": { 00:10:49.058 "read": true, 00:10:49.058 
"write": true, 00:10:49.058 "unmap": true, 00:10:49.058 "flush": true, 00:10:49.058 "reset": true, 00:10:49.058 "nvme_admin": false, 00:10:49.058 "nvme_io": false, 00:10:49.058 "nvme_io_md": false, 00:10:49.058 "write_zeroes": true, 00:10:49.058 "zcopy": true, 00:10:49.058 "get_zone_info": false, 00:10:49.058 "zone_management": false, 00:10:49.058 "zone_append": false, 00:10:49.058 "compare": false, 00:10:49.058 "compare_and_write": false, 00:10:49.058 "abort": true, 00:10:49.058 "seek_hole": false, 00:10:49.058 "seek_data": false, 00:10:49.058 "copy": true, 00:10:49.058 "nvme_iov_md": false 00:10:49.058 }, 00:10:49.058 "memory_domains": [ 00:10:49.058 { 00:10:49.058 "dma_device_id": "system", 00:10:49.058 "dma_device_type": 1 00:10:49.058 }, 00:10:49.058 { 00:10:49.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.058 "dma_device_type": 2 00:10:49.058 } 00:10:49.058 ], 00:10:49.058 "driver_specific": {} 00:10:49.058 } 00:10:49.058 ] 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.058 "name": "Existed_Raid", 00:10:49.058 "uuid": "853a42e4-79be-499b-adcd-b0f6f95fc843", 00:10:49.058 "strip_size_kb": 0, 00:10:49.058 "state": "configuring", 00:10:49.058 "raid_level": "raid1", 00:10:49.058 "superblock": true, 00:10:49.058 "num_base_bdevs": 4, 00:10:49.058 "num_base_bdevs_discovered": 3, 00:10:49.058 "num_base_bdevs_operational": 4, 00:10:49.058 "base_bdevs_list": [ 00:10:49.058 { 00:10:49.058 "name": "BaseBdev1", 00:10:49.058 "uuid": "5db3b909-80d2-40d5-82a9-ae1014f61874", 00:10:49.058 "is_configured": true, 00:10:49.058 "data_offset": 2048, 00:10:49.058 "data_size": 63488 00:10:49.058 }, 00:10:49.058 { 00:10:49.058 "name": "BaseBdev2", 00:10:49.058 "uuid": 
"d9ecafb9-64cf-4399-9c8d-f780a2629554", 00:10:49.058 "is_configured": true, 00:10:49.058 "data_offset": 2048, 00:10:49.058 "data_size": 63488 00:10:49.058 }, 00:10:49.058 { 00:10:49.058 "name": "BaseBdev3", 00:10:49.058 "uuid": "890518cc-dda3-4ab6-8f11-642f9fa8aa37", 00:10:49.058 "is_configured": true, 00:10:49.058 "data_offset": 2048, 00:10:49.058 "data_size": 63488 00:10:49.058 }, 00:10:49.058 { 00:10:49.058 "name": "BaseBdev4", 00:10:49.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.058 "is_configured": false, 00:10:49.058 "data_offset": 0, 00:10:49.058 "data_size": 0 00:10:49.058 } 00:10:49.058 ] 00:10:49.058 }' 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.058 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.318 BaseBdev4 00:10:49.318 [2024-11-20 13:24:30.950804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:49.318 [2024-11-20 13:24:30.951026] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:49.318 [2024-11-20 13:24:30.951041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:49.318 [2024-11-20 13:24:30.951350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:49.318 [2024-11-20 13:24:30.951514] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:49.318 [2024-11-20 13:24:30.951527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:10:49.318 [2024-11-20 13:24:30.951680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.318 [ 00:10:49.318 { 00:10:49.318 "name": "BaseBdev4", 00:10:49.318 "aliases": [ 00:10:49.318 "6e7f1700-2145-456e-a5ae-b9da4e886d42" 00:10:49.318 ], 00:10:49.318 "product_name": "Malloc disk", 00:10:49.318 "block_size": 512, 00:10:49.318 
"num_blocks": 65536, 00:10:49.318 "uuid": "6e7f1700-2145-456e-a5ae-b9da4e886d42", 00:10:49.318 "assigned_rate_limits": { 00:10:49.318 "rw_ios_per_sec": 0, 00:10:49.318 "rw_mbytes_per_sec": 0, 00:10:49.318 "r_mbytes_per_sec": 0, 00:10:49.318 "w_mbytes_per_sec": 0 00:10:49.318 }, 00:10:49.318 "claimed": true, 00:10:49.318 "claim_type": "exclusive_write", 00:10:49.318 "zoned": false, 00:10:49.318 "supported_io_types": { 00:10:49.318 "read": true, 00:10:49.318 "write": true, 00:10:49.318 "unmap": true, 00:10:49.318 "flush": true, 00:10:49.318 "reset": true, 00:10:49.318 "nvme_admin": false, 00:10:49.318 "nvme_io": false, 00:10:49.318 "nvme_io_md": false, 00:10:49.318 "write_zeroes": true, 00:10:49.318 "zcopy": true, 00:10:49.318 "get_zone_info": false, 00:10:49.318 "zone_management": false, 00:10:49.318 "zone_append": false, 00:10:49.318 "compare": false, 00:10:49.318 "compare_and_write": false, 00:10:49.318 "abort": true, 00:10:49.318 "seek_hole": false, 00:10:49.318 "seek_data": false, 00:10:49.318 "copy": true, 00:10:49.318 "nvme_iov_md": false 00:10:49.318 }, 00:10:49.318 "memory_domains": [ 00:10:49.318 { 00:10:49.318 "dma_device_id": "system", 00:10:49.318 "dma_device_type": 1 00:10:49.318 }, 00:10:49.318 { 00:10:49.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.318 "dma_device_type": 2 00:10:49.318 } 00:10:49.318 ], 00:10:49.318 "driver_specific": {} 00:10:49.318 } 00:10:49.318 ] 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.318 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.577 13:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.577 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.577 "name": "Existed_Raid", 00:10:49.577 "uuid": "853a42e4-79be-499b-adcd-b0f6f95fc843", 00:10:49.577 "strip_size_kb": 0, 00:10:49.577 "state": "online", 00:10:49.577 "raid_level": "raid1", 00:10:49.577 "superblock": true, 00:10:49.577 "num_base_bdevs": 4, 
00:10:49.577 "num_base_bdevs_discovered": 4, 00:10:49.577 "num_base_bdevs_operational": 4, 00:10:49.577 "base_bdevs_list": [ 00:10:49.577 { 00:10:49.577 "name": "BaseBdev1", 00:10:49.577 "uuid": "5db3b909-80d2-40d5-82a9-ae1014f61874", 00:10:49.577 "is_configured": true, 00:10:49.577 "data_offset": 2048, 00:10:49.577 "data_size": 63488 00:10:49.577 }, 00:10:49.577 { 00:10:49.577 "name": "BaseBdev2", 00:10:49.577 "uuid": "d9ecafb9-64cf-4399-9c8d-f780a2629554", 00:10:49.577 "is_configured": true, 00:10:49.577 "data_offset": 2048, 00:10:49.577 "data_size": 63488 00:10:49.577 }, 00:10:49.577 { 00:10:49.577 "name": "BaseBdev3", 00:10:49.577 "uuid": "890518cc-dda3-4ab6-8f11-642f9fa8aa37", 00:10:49.577 "is_configured": true, 00:10:49.577 "data_offset": 2048, 00:10:49.577 "data_size": 63488 00:10:49.577 }, 00:10:49.577 { 00:10:49.577 "name": "BaseBdev4", 00:10:49.577 "uuid": "6e7f1700-2145-456e-a5ae-b9da4e886d42", 00:10:49.577 "is_configured": true, 00:10:49.577 "data_offset": 2048, 00:10:49.577 "data_size": 63488 00:10:49.577 } 00:10:49.577 ] 00:10:49.577 }' 00:10:49.577 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.577 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.837 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:49.837 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:49.837 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:49.837 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:49.837 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.837 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.837 
13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:49.837 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.837 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.837 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.837 [2024-11-20 13:24:31.442423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.837 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.837 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.837 "name": "Existed_Raid", 00:10:49.837 "aliases": [ 00:10:49.837 "853a42e4-79be-499b-adcd-b0f6f95fc843" 00:10:49.837 ], 00:10:49.837 "product_name": "Raid Volume", 00:10:49.837 "block_size": 512, 00:10:49.837 "num_blocks": 63488, 00:10:49.837 "uuid": "853a42e4-79be-499b-adcd-b0f6f95fc843", 00:10:49.837 "assigned_rate_limits": { 00:10:49.837 "rw_ios_per_sec": 0, 00:10:49.837 "rw_mbytes_per_sec": 0, 00:10:49.837 "r_mbytes_per_sec": 0, 00:10:49.837 "w_mbytes_per_sec": 0 00:10:49.837 }, 00:10:49.837 "claimed": false, 00:10:49.837 "zoned": false, 00:10:49.837 "supported_io_types": { 00:10:49.837 "read": true, 00:10:49.837 "write": true, 00:10:49.837 "unmap": false, 00:10:49.837 "flush": false, 00:10:49.837 "reset": true, 00:10:49.837 "nvme_admin": false, 00:10:49.837 "nvme_io": false, 00:10:49.837 "nvme_io_md": false, 00:10:49.837 "write_zeroes": true, 00:10:49.837 "zcopy": false, 00:10:49.837 "get_zone_info": false, 00:10:49.837 "zone_management": false, 00:10:49.837 "zone_append": false, 00:10:49.837 "compare": false, 00:10:49.837 "compare_and_write": false, 00:10:49.837 "abort": false, 00:10:49.837 "seek_hole": false, 00:10:49.837 "seek_data": false, 00:10:49.837 "copy": false, 00:10:49.837 
"nvme_iov_md": false 00:10:49.837 }, 00:10:49.837 "memory_domains": [ 00:10:49.837 { 00:10:49.837 "dma_device_id": "system", 00:10:49.837 "dma_device_type": 1 00:10:49.837 }, 00:10:49.837 { 00:10:49.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.837 "dma_device_type": 2 00:10:49.837 }, 00:10:49.837 { 00:10:49.837 "dma_device_id": "system", 00:10:49.837 "dma_device_type": 1 00:10:49.837 }, 00:10:49.837 { 00:10:49.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.837 "dma_device_type": 2 00:10:49.837 }, 00:10:49.837 { 00:10:49.837 "dma_device_id": "system", 00:10:49.837 "dma_device_type": 1 00:10:49.837 }, 00:10:49.837 { 00:10:49.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.837 "dma_device_type": 2 00:10:49.837 }, 00:10:49.837 { 00:10:49.837 "dma_device_id": "system", 00:10:49.837 "dma_device_type": 1 00:10:49.837 }, 00:10:49.837 { 00:10:49.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.837 "dma_device_type": 2 00:10:49.837 } 00:10:49.837 ], 00:10:49.837 "driver_specific": { 00:10:49.837 "raid": { 00:10:49.837 "uuid": "853a42e4-79be-499b-adcd-b0f6f95fc843", 00:10:49.837 "strip_size_kb": 0, 00:10:49.837 "state": "online", 00:10:49.837 "raid_level": "raid1", 00:10:49.837 "superblock": true, 00:10:49.837 "num_base_bdevs": 4, 00:10:49.837 "num_base_bdevs_discovered": 4, 00:10:49.837 "num_base_bdevs_operational": 4, 00:10:49.837 "base_bdevs_list": [ 00:10:49.837 { 00:10:49.837 "name": "BaseBdev1", 00:10:49.837 "uuid": "5db3b909-80d2-40d5-82a9-ae1014f61874", 00:10:49.837 "is_configured": true, 00:10:49.837 "data_offset": 2048, 00:10:49.837 "data_size": 63488 00:10:49.837 }, 00:10:49.837 { 00:10:49.837 "name": "BaseBdev2", 00:10:49.837 "uuid": "d9ecafb9-64cf-4399-9c8d-f780a2629554", 00:10:49.837 "is_configured": true, 00:10:49.837 "data_offset": 2048, 00:10:49.837 "data_size": 63488 00:10:49.837 }, 00:10:49.837 { 00:10:49.837 "name": "BaseBdev3", 00:10:49.837 "uuid": "890518cc-dda3-4ab6-8f11-642f9fa8aa37", 00:10:49.837 "is_configured": true, 
00:10:49.837 "data_offset": 2048, 00:10:49.837 "data_size": 63488 00:10:49.837 }, 00:10:49.837 { 00:10:49.837 "name": "BaseBdev4", 00:10:49.837 "uuid": "6e7f1700-2145-456e-a5ae-b9da4e886d42", 00:10:49.837 "is_configured": true, 00:10:49.837 "data_offset": 2048, 00:10:49.837 "data_size": 63488 00:10:49.837 } 00:10:49.837 ] 00:10:49.837 } 00:10:49.837 } 00:10:49.837 }' 00:10:49.837 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:50.096 BaseBdev2 00:10:50.096 BaseBdev3 00:10:50.096 BaseBdev4' 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.096 13:24:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.096 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.097 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.097 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.097 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.097 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.097 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:50.097 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.097 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.097 [2024-11-20 13:24:31.761574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:50.356 13:24:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.356 "name": "Existed_Raid", 00:10:50.356 "uuid": "853a42e4-79be-499b-adcd-b0f6f95fc843", 00:10:50.356 "strip_size_kb": 0, 00:10:50.356 
"state": "online", 00:10:50.356 "raid_level": "raid1", 00:10:50.356 "superblock": true, 00:10:50.356 "num_base_bdevs": 4, 00:10:50.356 "num_base_bdevs_discovered": 3, 00:10:50.356 "num_base_bdevs_operational": 3, 00:10:50.356 "base_bdevs_list": [ 00:10:50.356 { 00:10:50.356 "name": null, 00:10:50.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.356 "is_configured": false, 00:10:50.356 "data_offset": 0, 00:10:50.356 "data_size": 63488 00:10:50.356 }, 00:10:50.356 { 00:10:50.356 "name": "BaseBdev2", 00:10:50.356 "uuid": "d9ecafb9-64cf-4399-9c8d-f780a2629554", 00:10:50.356 "is_configured": true, 00:10:50.356 "data_offset": 2048, 00:10:50.356 "data_size": 63488 00:10:50.356 }, 00:10:50.356 { 00:10:50.356 "name": "BaseBdev3", 00:10:50.356 "uuid": "890518cc-dda3-4ab6-8f11-642f9fa8aa37", 00:10:50.356 "is_configured": true, 00:10:50.356 "data_offset": 2048, 00:10:50.356 "data_size": 63488 00:10:50.356 }, 00:10:50.356 { 00:10:50.356 "name": "BaseBdev4", 00:10:50.356 "uuid": "6e7f1700-2145-456e-a5ae-b9da4e886d42", 00:10:50.356 "is_configured": true, 00:10:50.356 "data_offset": 2048, 00:10:50.356 "data_size": 63488 00:10:50.356 } 00:10:50.356 ] 00:10:50.356 }' 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.356 13:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.615 13:24:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.615 [2024-11-20 13:24:32.264475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.615 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:50.877 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.877 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:50.877 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:10:50.877 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:50.877 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.877 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.878 [2024-11-20 13:24:32.327633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.878 [2024-11-20 13:24:32.398951] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:50.878 [2024-11-20 13:24:32.399151] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.878 [2024-11-20 13:24:32.410807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.878 [2024-11-20 13:24:32.410923] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.878 [2024-11-20 13:24:32.410970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.878 BaseBdev2 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:50.878 [ 00:10:50.878 { 00:10:50.878 "name": "BaseBdev2", 00:10:50.878 "aliases": [ 00:10:50.878 "a83c4d90-9113-42cc-afee-5a0d5abb2b07" 00:10:50.878 ], 00:10:50.878 "product_name": "Malloc disk", 00:10:50.878 "block_size": 512, 00:10:50.878 "num_blocks": 65536, 00:10:50.878 "uuid": "a83c4d90-9113-42cc-afee-5a0d5abb2b07", 00:10:50.878 "assigned_rate_limits": { 00:10:50.878 "rw_ios_per_sec": 0, 00:10:50.878 "rw_mbytes_per_sec": 0, 00:10:50.878 "r_mbytes_per_sec": 0, 00:10:50.878 "w_mbytes_per_sec": 0 00:10:50.878 }, 00:10:50.878 "claimed": false, 00:10:50.878 "zoned": false, 00:10:50.878 "supported_io_types": { 00:10:50.878 "read": true, 00:10:50.878 "write": true, 00:10:50.878 "unmap": true, 00:10:50.878 "flush": true, 00:10:50.878 "reset": true, 00:10:50.878 "nvme_admin": false, 00:10:50.878 "nvme_io": false, 00:10:50.878 "nvme_io_md": false, 00:10:50.878 "write_zeroes": true, 00:10:50.878 "zcopy": true, 00:10:50.878 "get_zone_info": false, 00:10:50.878 "zone_management": false, 00:10:50.878 "zone_append": false, 00:10:50.878 "compare": false, 00:10:50.878 "compare_and_write": false, 00:10:50.878 "abort": true, 00:10:50.878 "seek_hole": false, 00:10:50.878 "seek_data": false, 00:10:50.878 "copy": true, 00:10:50.878 "nvme_iov_md": false 00:10:50.878 }, 00:10:50.878 "memory_domains": [ 00:10:50.878 { 00:10:50.878 "dma_device_id": "system", 00:10:50.878 "dma_device_type": 1 00:10:50.878 }, 00:10:50.878 { 00:10:50.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.878 "dma_device_type": 2 00:10:50.878 } 00:10:50.878 ], 00:10:50.878 "driver_specific": {} 00:10:50.878 } 00:10:50.878 ] 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:50.878 13:24:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.878 BaseBdev3 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.878 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.879 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.879 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.879 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.879 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:50.879 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.879 13:24:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.879 [ 00:10:50.879 { 00:10:50.879 "name": "BaseBdev3", 00:10:50.879 "aliases": [ 00:10:50.879 "b41994ed-b62b-42ed-ba04-033b01ac1aaf" 00:10:50.879 ], 00:10:50.879 "product_name": "Malloc disk", 00:10:50.879 "block_size": 512, 00:10:50.879 "num_blocks": 65536, 00:10:50.879 "uuid": "b41994ed-b62b-42ed-ba04-033b01ac1aaf", 00:10:50.879 "assigned_rate_limits": { 00:10:50.879 "rw_ios_per_sec": 0, 00:10:50.879 "rw_mbytes_per_sec": 0, 00:10:50.879 "r_mbytes_per_sec": 0, 00:10:50.879 "w_mbytes_per_sec": 0 00:10:50.879 }, 00:10:50.879 "claimed": false, 00:10:50.879 "zoned": false, 00:10:50.879 "supported_io_types": { 00:10:50.879 "read": true, 00:10:50.879 "write": true, 00:10:50.879 "unmap": true, 00:10:50.879 "flush": true, 00:10:50.879 "reset": true, 00:10:50.879 "nvme_admin": false, 00:10:50.879 "nvme_io": false, 00:10:50.879 "nvme_io_md": false, 00:10:50.879 "write_zeroes": true, 00:10:50.879 "zcopy": true, 00:10:50.879 "get_zone_info": false, 00:10:50.879 "zone_management": false, 00:10:50.879 "zone_append": false, 00:10:50.879 "compare": false, 00:10:50.879 "compare_and_write": false, 00:10:50.879 "abort": true, 00:10:50.879 "seek_hole": false, 00:10:50.879 "seek_data": false, 00:10:50.879 "copy": true, 00:10:50.879 "nvme_iov_md": false 00:10:50.879 }, 00:10:50.879 "memory_domains": [ 00:10:50.879 { 00:10:50.879 "dma_device_id": "system", 00:10:50.879 "dma_device_type": 1 00:10:50.879 }, 00:10:50.879 { 00:10:50.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.879 "dma_device_type": 2 00:10:50.879 } 00:10:50.879 ], 00:10:50.879 "driver_specific": {} 00:10:50.879 } 00:10:50.879 ] 00:10:50.879 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.879 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.879 13:24:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:50.879 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.879 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:50.879 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.879 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.142 BaseBdev4 00:10:51.142 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.142 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:51.142 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:51.142 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.142 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:51.142 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.142 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.142 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.142 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.142 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.142 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.142 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:51.142 13:24:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.142 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.142 [ 00:10:51.142 { 00:10:51.142 "name": "BaseBdev4", 00:10:51.142 "aliases": [ 00:10:51.142 "d5c88b80-0e5a-46e6-b7dd-d2864f800969" 00:10:51.142 ], 00:10:51.142 "product_name": "Malloc disk", 00:10:51.142 "block_size": 512, 00:10:51.142 "num_blocks": 65536, 00:10:51.142 "uuid": "d5c88b80-0e5a-46e6-b7dd-d2864f800969", 00:10:51.142 "assigned_rate_limits": { 00:10:51.142 "rw_ios_per_sec": 0, 00:10:51.142 "rw_mbytes_per_sec": 0, 00:10:51.142 "r_mbytes_per_sec": 0, 00:10:51.142 "w_mbytes_per_sec": 0 00:10:51.142 }, 00:10:51.142 "claimed": false, 00:10:51.142 "zoned": false, 00:10:51.142 "supported_io_types": { 00:10:51.142 "read": true, 00:10:51.142 "write": true, 00:10:51.142 "unmap": true, 00:10:51.142 "flush": true, 00:10:51.142 "reset": true, 00:10:51.142 "nvme_admin": false, 00:10:51.142 "nvme_io": false, 00:10:51.142 "nvme_io_md": false, 00:10:51.142 "write_zeroes": true, 00:10:51.142 "zcopy": true, 00:10:51.142 "get_zone_info": false, 00:10:51.142 "zone_management": false, 00:10:51.142 "zone_append": false, 00:10:51.142 "compare": false, 00:10:51.142 "compare_and_write": false, 00:10:51.142 "abort": true, 00:10:51.142 "seek_hole": false, 00:10:51.142 "seek_data": false, 00:10:51.142 "copy": true, 00:10:51.142 "nvme_iov_md": false 00:10:51.142 }, 00:10:51.142 "memory_domains": [ 00:10:51.142 { 00:10:51.142 "dma_device_id": "system", 00:10:51.142 "dma_device_type": 1 00:10:51.142 }, 00:10:51.142 { 00:10:51.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.142 "dma_device_type": 2 00:10:51.142 } 00:10:51.142 ], 00:10:51.142 "driver_specific": {} 00:10:51.142 } 00:10:51.142 ] 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
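The `waitforbdev BaseBdev4` call that just returned above runs the helper whose xtrace lines (`@903`–`@911` of `autotest_common.sh`) appear in the log: it defaults an empty timeout to 2000, waits for examine, then queries the bdev. A minimal standalone sketch of that flow, with `rpc_cmd` as a stand-in (an assumption) for SPDK's `scripts/rpc.py` wrapper:

```shell
# Hedged sketch of the waitforbdev helper visible in the trace above:
# default the timeout to 2000 when none is given (the trace's
# "[[ -z '' ]]" followed by "bdev_timeout=2000"), wait for examine,
# then poll the bdev via bdev_get_bdevs with that timeout.
# rpc_cmd stands in (assumption) for SPDK's scripts/rpc.py wrapper.
waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=$2
    [ -z "$bdev_timeout" ] && bdev_timeout=2000
    rpc_cmd bdev_wait_for_examine
    rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
}
```

The real helper also retries on failure up to `$i` attempts; this sketch keeps only the happy path the trace exercises.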
00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.143 [2024-11-20 13:24:32.576967] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:51.143 [2024-11-20 13:24:32.577072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:51.143 [2024-11-20 13:24:32.577123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.143 [2024-11-20 13:24:32.579150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.143 [2024-11-20 13:24:32.579232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.143 "name": "Existed_Raid", 00:10:51.143 "uuid": "d416108b-4fff-4473-ba90-f574c25eef06", 00:10:51.143 "strip_size_kb": 0, 00:10:51.143 "state": "configuring", 00:10:51.143 "raid_level": "raid1", 00:10:51.143 "superblock": true, 00:10:51.143 "num_base_bdevs": 4, 00:10:51.143 "num_base_bdevs_discovered": 3, 00:10:51.143 "num_base_bdevs_operational": 4, 00:10:51.143 "base_bdevs_list": [ 00:10:51.143 { 00:10:51.143 "name": "BaseBdev1", 00:10:51.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.143 "is_configured": false, 00:10:51.143 "data_offset": 0, 00:10:51.143 "data_size": 0 00:10:51.143 }, 00:10:51.143 { 00:10:51.143 "name": "BaseBdev2", 00:10:51.143 "uuid": "a83c4d90-9113-42cc-afee-5a0d5abb2b07", 
00:10:51.143 "is_configured": true, 00:10:51.143 "data_offset": 2048, 00:10:51.143 "data_size": 63488 00:10:51.143 }, 00:10:51.143 { 00:10:51.143 "name": "BaseBdev3", 00:10:51.143 "uuid": "b41994ed-b62b-42ed-ba04-033b01ac1aaf", 00:10:51.143 "is_configured": true, 00:10:51.143 "data_offset": 2048, 00:10:51.143 "data_size": 63488 00:10:51.143 }, 00:10:51.143 { 00:10:51.143 "name": "BaseBdev4", 00:10:51.143 "uuid": "d5c88b80-0e5a-46e6-b7dd-d2864f800969", 00:10:51.143 "is_configured": true, 00:10:51.143 "data_offset": 2048, 00:10:51.143 "data_size": 63488 00:10:51.143 } 00:10:51.143 ] 00:10:51.143 }' 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.143 13:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.402 [2024-11-20 13:24:33.040209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.402 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.661 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.661 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.661 "name": "Existed_Raid", 00:10:51.661 "uuid": "d416108b-4fff-4473-ba90-f574c25eef06", 00:10:51.661 "strip_size_kb": 0, 00:10:51.661 "state": "configuring", 00:10:51.661 "raid_level": "raid1", 00:10:51.661 "superblock": true, 00:10:51.661 "num_base_bdevs": 4, 00:10:51.661 "num_base_bdevs_discovered": 2, 00:10:51.661 "num_base_bdevs_operational": 4, 00:10:51.661 "base_bdevs_list": [ 00:10:51.661 { 00:10:51.661 "name": "BaseBdev1", 00:10:51.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.661 "is_configured": false, 00:10:51.661 "data_offset": 0, 00:10:51.661 "data_size": 0 00:10:51.661 }, 00:10:51.661 { 00:10:51.661 "name": null, 00:10:51.661 "uuid": "a83c4d90-9113-42cc-afee-5a0d5abb2b07", 00:10:51.661 
"is_configured": false, 00:10:51.661 "data_offset": 0, 00:10:51.661 "data_size": 63488 00:10:51.661 }, 00:10:51.661 { 00:10:51.661 "name": "BaseBdev3", 00:10:51.661 "uuid": "b41994ed-b62b-42ed-ba04-033b01ac1aaf", 00:10:51.661 "is_configured": true, 00:10:51.661 "data_offset": 2048, 00:10:51.661 "data_size": 63488 00:10:51.661 }, 00:10:51.661 { 00:10:51.661 "name": "BaseBdev4", 00:10:51.661 "uuid": "d5c88b80-0e5a-46e6-b7dd-d2864f800969", 00:10:51.661 "is_configured": true, 00:10:51.661 "data_offset": 2048, 00:10:51.661 "data_size": 63488 00:10:51.661 } 00:10:51.661 ] 00:10:51.661 }' 00:10:51.661 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.661 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.920 [2024-11-20 13:24:33.530554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:51.920 BaseBdev1 
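Each state check in the trace pipes `bdev_raid_get_bdevs all` through jq, both to pick out the raid bdev by name and to probe individual `base_bdevs_list` slots (e.g. the `.[0].base_bdevs_list[1].is_configured` check just above). The same filters can be run on a sample document shaped like the `raid_bdev_info` JSON in the log, with no SPDK target needed:

```shell
# Standalone sketch of the jq filtering used throughout the trace, run on
# a sample document shaped like the raid_bdev_info output above (field
# names and values taken from the log).
info='[{"name": "Existed_Raid", "state": "configuring",
        "num_base_bdevs_discovered": 2,
        "base_bdevs_list": [
          {"name": "BaseBdev1", "is_configured": false},
          {"name": null,       "is_configured": false}]}]'

# As in the trace: jq -r '.[] | select(.name == "Existed_Raid")'
state=$(echo "$info" | jq -r '.[] | select(.name == "Existed_Raid") | .state')

# As in the trace: jq '.[0].base_bdevs_list[1].is_configured'
configured=$(echo "$info" | jq '.[0].base_bdevs_list[1].is_configured')

echo "$state $configured"
```

The `-r` flag strips the JSON quoting so the string compares cleanly in the test's `[[ … == \f\a\l\s\e ]]`-style checks.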
00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.920 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.920 [ 00:10:51.920 { 00:10:51.920 "name": "BaseBdev1", 00:10:51.920 "aliases": [ 00:10:51.920 "1ddcfb43-e784-426c-a5d0-0e1dcac7bf9b" 00:10:51.920 ], 00:10:51.920 "product_name": "Malloc disk", 00:10:51.920 "block_size": 512, 00:10:51.920 "num_blocks": 65536, 00:10:51.920 "uuid": "1ddcfb43-e784-426c-a5d0-0e1dcac7bf9b", 00:10:51.920 "assigned_rate_limits": { 00:10:51.920 
"rw_ios_per_sec": 0, 00:10:51.920 "rw_mbytes_per_sec": 0, 00:10:51.920 "r_mbytes_per_sec": 0, 00:10:51.920 "w_mbytes_per_sec": 0 00:10:51.920 }, 00:10:51.920 "claimed": true, 00:10:51.920 "claim_type": "exclusive_write", 00:10:51.920 "zoned": false, 00:10:51.920 "supported_io_types": { 00:10:51.920 "read": true, 00:10:51.920 "write": true, 00:10:51.920 "unmap": true, 00:10:51.920 "flush": true, 00:10:51.920 "reset": true, 00:10:51.921 "nvme_admin": false, 00:10:51.921 "nvme_io": false, 00:10:51.921 "nvme_io_md": false, 00:10:51.921 "write_zeroes": true, 00:10:51.921 "zcopy": true, 00:10:51.921 "get_zone_info": false, 00:10:51.921 "zone_management": false, 00:10:51.921 "zone_append": false, 00:10:51.921 "compare": false, 00:10:51.921 "compare_and_write": false, 00:10:51.921 "abort": true, 00:10:51.921 "seek_hole": false, 00:10:51.921 "seek_data": false, 00:10:51.921 "copy": true, 00:10:51.921 "nvme_iov_md": false 00:10:51.921 }, 00:10:51.921 "memory_domains": [ 00:10:51.921 { 00:10:51.921 "dma_device_id": "system", 00:10:51.921 "dma_device_type": 1 00:10:51.921 }, 00:10:51.921 { 00:10:51.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.921 "dma_device_type": 2 00:10:51.921 } 00:10:51.921 ], 00:10:51.921 "driver_specific": {} 00:10:51.921 } 00:10:51.921 ] 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.921 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.179 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.179 "name": "Existed_Raid", 00:10:52.179 "uuid": "d416108b-4fff-4473-ba90-f574c25eef06", 00:10:52.179 "strip_size_kb": 0, 00:10:52.179 "state": "configuring", 00:10:52.179 "raid_level": "raid1", 00:10:52.179 "superblock": true, 00:10:52.180 "num_base_bdevs": 4, 00:10:52.180 "num_base_bdevs_discovered": 3, 00:10:52.180 "num_base_bdevs_operational": 4, 00:10:52.180 "base_bdevs_list": [ 00:10:52.180 { 00:10:52.180 "name": "BaseBdev1", 00:10:52.180 "uuid": "1ddcfb43-e784-426c-a5d0-0e1dcac7bf9b", 00:10:52.180 "is_configured": true, 00:10:52.180 "data_offset": 2048, 00:10:52.180 "data_size": 63488 
00:10:52.180 }, 00:10:52.180 { 00:10:52.180 "name": null, 00:10:52.180 "uuid": "a83c4d90-9113-42cc-afee-5a0d5abb2b07", 00:10:52.180 "is_configured": false, 00:10:52.180 "data_offset": 0, 00:10:52.180 "data_size": 63488 00:10:52.180 }, 00:10:52.180 { 00:10:52.180 "name": "BaseBdev3", 00:10:52.180 "uuid": "b41994ed-b62b-42ed-ba04-033b01ac1aaf", 00:10:52.180 "is_configured": true, 00:10:52.180 "data_offset": 2048, 00:10:52.180 "data_size": 63488 00:10:52.180 }, 00:10:52.180 { 00:10:52.180 "name": "BaseBdev4", 00:10:52.180 "uuid": "d5c88b80-0e5a-46e6-b7dd-d2864f800969", 00:10:52.180 "is_configured": true, 00:10:52.180 "data_offset": 2048, 00:10:52.180 "data_size": 63488 00:10:52.180 } 00:10:52.180 ] 00:10:52.180 }' 00:10:52.180 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.180 13:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.439 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.439 13:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.439 
[2024-11-20 13:24:34.049750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.439 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.439 13:24:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.439 "name": "Existed_Raid", 00:10:52.439 "uuid": "d416108b-4fff-4473-ba90-f574c25eef06", 00:10:52.439 "strip_size_kb": 0, 00:10:52.439 "state": "configuring", 00:10:52.439 "raid_level": "raid1", 00:10:52.439 "superblock": true, 00:10:52.439 "num_base_bdevs": 4, 00:10:52.439 "num_base_bdevs_discovered": 2, 00:10:52.439 "num_base_bdevs_operational": 4, 00:10:52.439 "base_bdevs_list": [ 00:10:52.439 { 00:10:52.439 "name": "BaseBdev1", 00:10:52.439 "uuid": "1ddcfb43-e784-426c-a5d0-0e1dcac7bf9b", 00:10:52.439 "is_configured": true, 00:10:52.439 "data_offset": 2048, 00:10:52.439 "data_size": 63488 00:10:52.439 }, 00:10:52.439 { 00:10:52.439 "name": null, 00:10:52.439 "uuid": "a83c4d90-9113-42cc-afee-5a0d5abb2b07", 00:10:52.439 "is_configured": false, 00:10:52.439 "data_offset": 0, 00:10:52.439 "data_size": 63488 00:10:52.439 }, 00:10:52.439 { 00:10:52.439 "name": null, 00:10:52.439 "uuid": "b41994ed-b62b-42ed-ba04-033b01ac1aaf", 00:10:52.439 "is_configured": false, 00:10:52.439 "data_offset": 0, 00:10:52.439 "data_size": 63488 00:10:52.439 }, 00:10:52.439 { 00:10:52.439 "name": "BaseBdev4", 00:10:52.439 "uuid": "d5c88b80-0e5a-46e6-b7dd-d2864f800969", 00:10:52.439 "is_configured": true, 00:10:52.439 "data_offset": 2048, 00:10:52.439 "data_size": 63488 00:10:52.439 } 00:10:52.439 ] 00:10:52.440 }' 00:10:52.440 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.440 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:53.009 
13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.009 [2024-11-20 13:24:34.544917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.009 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.009 "name": "Existed_Raid", 00:10:53.009 "uuid": "d416108b-4fff-4473-ba90-f574c25eef06", 00:10:53.009 "strip_size_kb": 0, 00:10:53.009 "state": "configuring", 00:10:53.009 "raid_level": "raid1", 00:10:53.009 "superblock": true, 00:10:53.009 "num_base_bdevs": 4, 00:10:53.009 "num_base_bdevs_discovered": 3, 00:10:53.009 "num_base_bdevs_operational": 4, 00:10:53.009 "base_bdevs_list": [ 00:10:53.009 { 00:10:53.009 "name": "BaseBdev1", 00:10:53.009 "uuid": "1ddcfb43-e784-426c-a5d0-0e1dcac7bf9b", 00:10:53.009 "is_configured": true, 00:10:53.009 "data_offset": 2048, 00:10:53.009 "data_size": 63488 00:10:53.009 }, 00:10:53.009 { 00:10:53.009 "name": null, 00:10:53.009 "uuid": "a83c4d90-9113-42cc-afee-5a0d5abb2b07", 00:10:53.009 "is_configured": false, 00:10:53.009 "data_offset": 0, 00:10:53.009 "data_size": 63488 00:10:53.009 }, 00:10:53.009 { 00:10:53.009 "name": "BaseBdev3", 00:10:53.009 "uuid": "b41994ed-b62b-42ed-ba04-033b01ac1aaf", 00:10:53.009 "is_configured": true, 00:10:53.009 "data_offset": 2048, 00:10:53.009 "data_size": 63488 00:10:53.009 }, 00:10:53.009 { 00:10:53.009 "name": "BaseBdev4", 00:10:53.009 "uuid": 
"d5c88b80-0e5a-46e6-b7dd-d2864f800969", 00:10:53.009 "is_configured": true, 00:10:53.009 "data_offset": 2048, 00:10:53.009 "data_size": 63488 00:10:53.009 } 00:10:53.009 ] 00:10:53.009 }' 00:10:53.010 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.010 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.578 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.578 13:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:53.578 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.578 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.578 13:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.578 [2024-11-20 13:24:35.032158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.578 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.578 "name": "Existed_Raid", 00:10:53.578 "uuid": "d416108b-4fff-4473-ba90-f574c25eef06", 00:10:53.578 "strip_size_kb": 0, 00:10:53.578 "state": "configuring", 00:10:53.578 "raid_level": "raid1", 00:10:53.578 "superblock": true, 00:10:53.578 "num_base_bdevs": 4, 00:10:53.578 "num_base_bdevs_discovered": 2, 00:10:53.578 "num_base_bdevs_operational": 4, 00:10:53.578 "base_bdevs_list": [ 00:10:53.578 { 00:10:53.578 "name": null, 00:10:53.578 
"uuid": "1ddcfb43-e784-426c-a5d0-0e1dcac7bf9b", 00:10:53.578 "is_configured": false, 00:10:53.578 "data_offset": 0, 00:10:53.578 "data_size": 63488 00:10:53.578 }, 00:10:53.578 { 00:10:53.578 "name": null, 00:10:53.578 "uuid": "a83c4d90-9113-42cc-afee-5a0d5abb2b07", 00:10:53.578 "is_configured": false, 00:10:53.578 "data_offset": 0, 00:10:53.578 "data_size": 63488 00:10:53.578 }, 00:10:53.578 { 00:10:53.578 "name": "BaseBdev3", 00:10:53.578 "uuid": "b41994ed-b62b-42ed-ba04-033b01ac1aaf", 00:10:53.578 "is_configured": true, 00:10:53.578 "data_offset": 2048, 00:10:53.578 "data_size": 63488 00:10:53.578 }, 00:10:53.578 { 00:10:53.578 "name": "BaseBdev4", 00:10:53.578 "uuid": "d5c88b80-0e5a-46e6-b7dd-d2864f800969", 00:10:53.579 "is_configured": true, 00:10:53.579 "data_offset": 2048, 00:10:53.579 "data_size": 63488 00:10:53.579 } 00:10:53.579 ] 00:10:53.579 }' 00:10:53.579 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.579 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.838 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:53.838 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.838 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.838 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.838 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.097 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:54.097 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:54.097 13:24:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.097 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.097 [2024-11-20 13:24:35.513829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.097 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.098 13:24:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.098 "name": "Existed_Raid", 00:10:54.098 "uuid": "d416108b-4fff-4473-ba90-f574c25eef06", 00:10:54.098 "strip_size_kb": 0, 00:10:54.098 "state": "configuring", 00:10:54.098 "raid_level": "raid1", 00:10:54.098 "superblock": true, 00:10:54.098 "num_base_bdevs": 4, 00:10:54.098 "num_base_bdevs_discovered": 3, 00:10:54.098 "num_base_bdevs_operational": 4, 00:10:54.098 "base_bdevs_list": [ 00:10:54.098 { 00:10:54.098 "name": null, 00:10:54.098 "uuid": "1ddcfb43-e784-426c-a5d0-0e1dcac7bf9b", 00:10:54.098 "is_configured": false, 00:10:54.098 "data_offset": 0, 00:10:54.098 "data_size": 63488 00:10:54.098 }, 00:10:54.098 { 00:10:54.098 "name": "BaseBdev2", 00:10:54.098 "uuid": "a83c4d90-9113-42cc-afee-5a0d5abb2b07", 00:10:54.098 "is_configured": true, 00:10:54.098 "data_offset": 2048, 00:10:54.098 "data_size": 63488 00:10:54.098 }, 00:10:54.098 { 00:10:54.098 "name": "BaseBdev3", 00:10:54.098 "uuid": "b41994ed-b62b-42ed-ba04-033b01ac1aaf", 00:10:54.098 "is_configured": true, 00:10:54.098 "data_offset": 2048, 00:10:54.098 "data_size": 63488 00:10:54.098 }, 00:10:54.098 { 00:10:54.098 "name": "BaseBdev4", 00:10:54.098 "uuid": "d5c88b80-0e5a-46e6-b7dd-d2864f800969", 00:10:54.098 "is_configured": true, 00:10:54.098 "data_offset": 2048, 00:10:54.098 "data_size": 63488 00:10:54.098 } 00:10:54.098 ] 00:10:54.098 }' 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.098 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.357 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:54.357 13:24:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.357 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.357 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.357 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.357 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:54.357 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.357 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.357 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:54.357 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.357 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.357 13:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1ddcfb43-e784-426c-a5d0-0e1dcac7bf9b 00:10:54.357 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.357 13:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.357 [2024-11-20 13:24:36.012009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:54.357 [2024-11-20 13:24:36.012308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:54.357 [2024-11-20 13:24:36.012329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:54.357 [2024-11-20 13:24:36.012591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:10:54.357 NewBaseBdev 00:10:54.357 [2024-11-20 13:24:36.012718] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:54.357 [2024-11-20 13:24:36.012727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:54.357 [2024-11-20 13:24:36.012829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.357 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.357 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:54.357 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:54.357 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.357 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.357 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.357 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.357 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.357 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.357 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.617 13:24:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.617 [ 00:10:54.617 { 00:10:54.617 "name": "NewBaseBdev", 00:10:54.617 "aliases": [ 00:10:54.617 "1ddcfb43-e784-426c-a5d0-0e1dcac7bf9b" 00:10:54.617 ], 00:10:54.617 "product_name": "Malloc disk", 00:10:54.617 "block_size": 512, 00:10:54.617 "num_blocks": 65536, 00:10:54.617 "uuid": "1ddcfb43-e784-426c-a5d0-0e1dcac7bf9b", 00:10:54.617 "assigned_rate_limits": { 00:10:54.617 "rw_ios_per_sec": 0, 00:10:54.617 "rw_mbytes_per_sec": 0, 00:10:54.617 "r_mbytes_per_sec": 0, 00:10:54.617 "w_mbytes_per_sec": 0 00:10:54.617 }, 00:10:54.617 "claimed": true, 00:10:54.617 "claim_type": "exclusive_write", 00:10:54.617 "zoned": false, 00:10:54.617 "supported_io_types": { 00:10:54.617 "read": true, 00:10:54.617 "write": true, 00:10:54.617 "unmap": true, 00:10:54.617 "flush": true, 00:10:54.617 "reset": true, 00:10:54.617 "nvme_admin": false, 00:10:54.617 "nvme_io": false, 00:10:54.617 "nvme_io_md": false, 00:10:54.617 "write_zeroes": true, 00:10:54.617 "zcopy": true, 00:10:54.617 "get_zone_info": false, 00:10:54.617 "zone_management": false, 00:10:54.617 "zone_append": false, 00:10:54.617 "compare": false, 00:10:54.617 "compare_and_write": false, 00:10:54.617 "abort": true, 00:10:54.617 "seek_hole": false, 00:10:54.617 "seek_data": false, 00:10:54.617 "copy": true, 00:10:54.617 "nvme_iov_md": false 00:10:54.617 }, 00:10:54.617 "memory_domains": [ 00:10:54.617 { 00:10:54.617 "dma_device_id": "system", 00:10:54.617 "dma_device_type": 1 00:10:54.617 }, 00:10:54.617 { 00:10:54.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.617 "dma_device_type": 2 00:10:54.617 } 00:10:54.617 ], 00:10:54.617 "driver_specific": {} 00:10:54.617 } 00:10:54.617 ] 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:54.617 13:24:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.617 "name": "Existed_Raid", 00:10:54.617 "uuid": "d416108b-4fff-4473-ba90-f574c25eef06", 00:10:54.617 "strip_size_kb": 0, 00:10:54.617 
"state": "online", 00:10:54.617 "raid_level": "raid1", 00:10:54.617 "superblock": true, 00:10:54.617 "num_base_bdevs": 4, 00:10:54.617 "num_base_bdevs_discovered": 4, 00:10:54.617 "num_base_bdevs_operational": 4, 00:10:54.617 "base_bdevs_list": [ 00:10:54.617 { 00:10:54.617 "name": "NewBaseBdev", 00:10:54.617 "uuid": "1ddcfb43-e784-426c-a5d0-0e1dcac7bf9b", 00:10:54.617 "is_configured": true, 00:10:54.617 "data_offset": 2048, 00:10:54.617 "data_size": 63488 00:10:54.617 }, 00:10:54.617 { 00:10:54.617 "name": "BaseBdev2", 00:10:54.617 "uuid": "a83c4d90-9113-42cc-afee-5a0d5abb2b07", 00:10:54.617 "is_configured": true, 00:10:54.617 "data_offset": 2048, 00:10:54.617 "data_size": 63488 00:10:54.617 }, 00:10:54.617 { 00:10:54.617 "name": "BaseBdev3", 00:10:54.617 "uuid": "b41994ed-b62b-42ed-ba04-033b01ac1aaf", 00:10:54.617 "is_configured": true, 00:10:54.617 "data_offset": 2048, 00:10:54.617 "data_size": 63488 00:10:54.617 }, 00:10:54.617 { 00:10:54.617 "name": "BaseBdev4", 00:10:54.617 "uuid": "d5c88b80-0e5a-46e6-b7dd-d2864f800969", 00:10:54.617 "is_configured": true, 00:10:54.617 "data_offset": 2048, 00:10:54.617 "data_size": 63488 00:10:54.617 } 00:10:54.617 ] 00:10:54.617 }' 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.617 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.877 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:54.877 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:54.877 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:54.877 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:54.877 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:54.877 
13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:54.877 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:54.877 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.877 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.877 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:54.877 [2024-11-20 13:24:36.475775] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.877 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.877 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:54.877 "name": "Existed_Raid", 00:10:54.877 "aliases": [ 00:10:54.877 "d416108b-4fff-4473-ba90-f574c25eef06" 00:10:54.877 ], 00:10:54.877 "product_name": "Raid Volume", 00:10:54.877 "block_size": 512, 00:10:54.877 "num_blocks": 63488, 00:10:54.877 "uuid": "d416108b-4fff-4473-ba90-f574c25eef06", 00:10:54.877 "assigned_rate_limits": { 00:10:54.877 "rw_ios_per_sec": 0, 00:10:54.877 "rw_mbytes_per_sec": 0, 00:10:54.877 "r_mbytes_per_sec": 0, 00:10:54.877 "w_mbytes_per_sec": 0 00:10:54.877 }, 00:10:54.877 "claimed": false, 00:10:54.877 "zoned": false, 00:10:54.877 "supported_io_types": { 00:10:54.877 "read": true, 00:10:54.877 "write": true, 00:10:54.877 "unmap": false, 00:10:54.877 "flush": false, 00:10:54.877 "reset": true, 00:10:54.877 "nvme_admin": false, 00:10:54.877 "nvme_io": false, 00:10:54.877 "nvme_io_md": false, 00:10:54.877 "write_zeroes": true, 00:10:54.877 "zcopy": false, 00:10:54.877 "get_zone_info": false, 00:10:54.877 "zone_management": false, 00:10:54.877 "zone_append": false, 00:10:54.877 "compare": false, 00:10:54.877 "compare_and_write": false, 00:10:54.877 
"abort": false, 00:10:54.877 "seek_hole": false, 00:10:54.877 "seek_data": false, 00:10:54.877 "copy": false, 00:10:54.877 "nvme_iov_md": false 00:10:54.877 }, 00:10:54.877 "memory_domains": [ 00:10:54.877 { 00:10:54.877 "dma_device_id": "system", 00:10:54.877 "dma_device_type": 1 00:10:54.877 }, 00:10:54.877 { 00:10:54.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.877 "dma_device_type": 2 00:10:54.877 }, 00:10:54.877 { 00:10:54.877 "dma_device_id": "system", 00:10:54.877 "dma_device_type": 1 00:10:54.877 }, 00:10:54.877 { 00:10:54.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.877 "dma_device_type": 2 00:10:54.877 }, 00:10:54.877 { 00:10:54.877 "dma_device_id": "system", 00:10:54.877 "dma_device_type": 1 00:10:54.877 }, 00:10:54.877 { 00:10:54.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.877 "dma_device_type": 2 00:10:54.877 }, 00:10:54.877 { 00:10:54.878 "dma_device_id": "system", 00:10:54.878 "dma_device_type": 1 00:10:54.878 }, 00:10:54.878 { 00:10:54.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.878 "dma_device_type": 2 00:10:54.878 } 00:10:54.878 ], 00:10:54.878 "driver_specific": { 00:10:54.878 "raid": { 00:10:54.878 "uuid": "d416108b-4fff-4473-ba90-f574c25eef06", 00:10:54.878 "strip_size_kb": 0, 00:10:54.878 "state": "online", 00:10:54.878 "raid_level": "raid1", 00:10:54.878 "superblock": true, 00:10:54.878 "num_base_bdevs": 4, 00:10:54.878 "num_base_bdevs_discovered": 4, 00:10:54.878 "num_base_bdevs_operational": 4, 00:10:54.878 "base_bdevs_list": [ 00:10:54.878 { 00:10:54.878 "name": "NewBaseBdev", 00:10:54.878 "uuid": "1ddcfb43-e784-426c-a5d0-0e1dcac7bf9b", 00:10:54.878 "is_configured": true, 00:10:54.878 "data_offset": 2048, 00:10:54.878 "data_size": 63488 00:10:54.878 }, 00:10:54.878 { 00:10:54.878 "name": "BaseBdev2", 00:10:54.878 "uuid": "a83c4d90-9113-42cc-afee-5a0d5abb2b07", 00:10:54.878 "is_configured": true, 00:10:54.878 "data_offset": 2048, 00:10:54.878 "data_size": 63488 00:10:54.878 }, 00:10:54.878 { 
00:10:54.878 "name": "BaseBdev3", 00:10:54.878 "uuid": "b41994ed-b62b-42ed-ba04-033b01ac1aaf", 00:10:54.878 "is_configured": true, 00:10:54.878 "data_offset": 2048, 00:10:54.878 "data_size": 63488 00:10:54.878 }, 00:10:54.878 { 00:10:54.878 "name": "BaseBdev4", 00:10:54.878 "uuid": "d5c88b80-0e5a-46e6-b7dd-d2864f800969", 00:10:54.878 "is_configured": true, 00:10:54.878 "data_offset": 2048, 00:10:54.878 "data_size": 63488 00:10:54.878 } 00:10:54.878 ] 00:10:54.878 } 00:10:54.878 } 00:10:54.878 }' 00:10:54.878 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.137 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:55.137 BaseBdev2 00:10:55.137 BaseBdev3 00:10:55.137 BaseBdev4' 00:10:55.137 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.137 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:55.137 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.137 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:55.137 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.137 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.137 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.137 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.137 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:55.137 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.137 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.138 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.397 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.397 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.397 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:55.397 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.397 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.397 [2024-11-20 13:24:36.822767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.397 [2024-11-20 13:24:36.822813] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.397 [2024-11-20 13:24:36.822919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.398 [2024-11-20 13:24:36.823216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.398 [2024-11-20 13:24:36.823233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001c80 name Existed_Raid, state offline 00:10:55.398 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.398 13:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84321 00:10:55.398 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84321 ']' 00:10:55.398 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 84321 00:10:55.398 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:55.398 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.398 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84321 00:10:55.398 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.398 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.398 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84321' 00:10:55.398 killing process with pid 84321 00:10:55.398 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84321 00:10:55.398 [2024-11-20 13:24:36.864621] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:55.398 13:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84321 00:10:55.398 [2024-11-20 13:24:36.910011] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.657 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:55.657 ************************************ 00:10:55.657 END TEST raid_state_function_test_sb 00:10:55.657 ************************************ 00:10:55.657 00:10:55.657 real 0m9.475s 
00:10:55.657 user 0m16.333s 00:10:55.657 sys 0m1.892s 00:10:55.657 13:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.657 13:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.657 13:24:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:10:55.657 13:24:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:55.657 13:24:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.657 13:24:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.657 ************************************ 00:10:55.657 START TEST raid_superblock_test 00:10:55.657 ************************************ 00:10:55.657 13:24:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:10:55.657 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:55.657 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:55.657 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:55.657 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:55.658 13:24:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84970 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84970 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:55.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84970 ']' 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.658 13:24:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.658 [2024-11-20 13:24:37.295008] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:10:55.658 [2024-11-20 13:24:37.295358] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84970 ] 00:10:55.916 [2024-11-20 13:24:37.451371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.916 [2024-11-20 13:24:37.477644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.916 [2024-11-20 13:24:37.521049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.916 [2024-11-20 13:24:37.521169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.484 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.484 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:56.484 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:56.484 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.484 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:56.484 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:56.484 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:56.484 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.484 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.484 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.484 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:56.484 
13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.484 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.744 malloc1 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.744 [2024-11-20 13:24:38.164423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:56.744 [2024-11-20 13:24:38.164483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.744 [2024-11-20 13:24:38.164501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:56.744 [2024-11-20 13:24:38.164515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.744 [2024-11-20 13:24:38.166650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.744 [2024-11-20 13:24:38.166728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:56.744 pt1 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.744 malloc2 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.744 [2024-11-20 13:24:38.193241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:56.744 [2024-11-20 13:24:38.193346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.744 [2024-11-20 13:24:38.193377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:56.744 [2024-11-20 13:24:38.193407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.744 [2024-11-20 13:24:38.195505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.744 [2024-11-20 13:24:38.195573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:56.744 
pt2 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.744 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.745 malloc3 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.745 [2024-11-20 13:24:38.225943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:56.745 [2024-11-20 13:24:38.226055] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.745 [2024-11-20 13:24:38.226111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:56.745 [2024-11-20 13:24:38.226141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.745 [2024-11-20 13:24:38.228269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.745 [2024-11-20 13:24:38.228358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:56.745 pt3 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.745 malloc4 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.745 [2024-11-20 13:24:38.264506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:56.745 [2024-11-20 13:24:38.264610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.745 [2024-11-20 13:24:38.264630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:56.745 [2024-11-20 13:24:38.264644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.745 [2024-11-20 13:24:38.266944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.745 [2024-11-20 13:24:38.266983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:56.745 pt4 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.745 [2024-11-20 13:24:38.276519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:56.745 [2024-11-20 13:24:38.278321] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:56.745 [2024-11-20 13:24:38.278389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:56.745 [2024-11-20 13:24:38.278433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:56.745 [2024-11-20 13:24:38.278594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:56.745 [2024-11-20 13:24:38.278607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:56.745 [2024-11-20 13:24:38.278834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:56.745 [2024-11-20 13:24:38.278988] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:56.745 [2024-11-20 13:24:38.278997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:56.745 [2024-11-20 13:24:38.279152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.745 
13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.745 "name": "raid_bdev1", 00:10:56.745 "uuid": "f358dff6-216c-4ce7-83ac-1ff7b1414a89", 00:10:56.745 "strip_size_kb": 0, 00:10:56.745 "state": "online", 00:10:56.745 "raid_level": "raid1", 00:10:56.745 "superblock": true, 00:10:56.745 "num_base_bdevs": 4, 00:10:56.745 "num_base_bdevs_discovered": 4, 00:10:56.745 "num_base_bdevs_operational": 4, 00:10:56.745 "base_bdevs_list": [ 00:10:56.745 { 00:10:56.745 "name": "pt1", 00:10:56.745 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.745 "is_configured": true, 00:10:56.745 "data_offset": 2048, 00:10:56.745 "data_size": 63488 00:10:56.745 }, 00:10:56.745 { 00:10:56.745 "name": "pt2", 00:10:56.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.745 "is_configured": true, 00:10:56.745 "data_offset": 2048, 00:10:56.745 "data_size": 63488 00:10:56.745 }, 00:10:56.745 { 00:10:56.745 "name": "pt3", 00:10:56.745 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:56.745 "is_configured": true, 00:10:56.745 "data_offset": 2048, 00:10:56.745 "data_size": 63488 
00:10:56.745 }, 00:10:56.745 { 00:10:56.745 "name": "pt4", 00:10:56.745 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:56.745 "is_configured": true, 00:10:56.745 "data_offset": 2048, 00:10:56.745 "data_size": 63488 00:10:56.745 } 00:10:56.745 ] 00:10:56.745 }' 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.745 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.318 [2024-11-20 13:24:38.704137] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.318 "name": "raid_bdev1", 00:10:57.318 "aliases": [ 00:10:57.318 "f358dff6-216c-4ce7-83ac-1ff7b1414a89" 00:10:57.318 ], 
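The `verify_raid_bdev_state raid_bdev1 online raid1 0 4` call above extracts the raid bdev's JSON via `bdev_raid_get_bdevs` and compares a handful of fields. A minimal sketch of those checks, applied to the info dumped in the log (JSON trimmed to the compared fields; the helper name is mine, not SPDK's):

```python
import json

# Fields copied from the raid_bdev_info JSON dumped in the xtrace above.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4
}
""")

def verify_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the comparisons verify_raid_bdev_state performs in bdev_raid.sh.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

verify_state(raid_bdev_info, "online", "raid1", 0, 4)
```

Here all four base bdevs are discovered and configured, so the volume comes up `online` immediately after `bdev_raid_create`.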
00:10:57.318 "product_name": "Raid Volume", 00:10:57.318 "block_size": 512, 00:10:57.318 "num_blocks": 63488, 00:10:57.318 "uuid": "f358dff6-216c-4ce7-83ac-1ff7b1414a89", 00:10:57.318 "assigned_rate_limits": { 00:10:57.318 "rw_ios_per_sec": 0, 00:10:57.318 "rw_mbytes_per_sec": 0, 00:10:57.318 "r_mbytes_per_sec": 0, 00:10:57.318 "w_mbytes_per_sec": 0 00:10:57.318 }, 00:10:57.318 "claimed": false, 00:10:57.318 "zoned": false, 00:10:57.318 "supported_io_types": { 00:10:57.318 "read": true, 00:10:57.318 "write": true, 00:10:57.318 "unmap": false, 00:10:57.318 "flush": false, 00:10:57.318 "reset": true, 00:10:57.318 "nvme_admin": false, 00:10:57.318 "nvme_io": false, 00:10:57.318 "nvme_io_md": false, 00:10:57.318 "write_zeroes": true, 00:10:57.318 "zcopy": false, 00:10:57.318 "get_zone_info": false, 00:10:57.318 "zone_management": false, 00:10:57.318 "zone_append": false, 00:10:57.318 "compare": false, 00:10:57.318 "compare_and_write": false, 00:10:57.318 "abort": false, 00:10:57.318 "seek_hole": false, 00:10:57.318 "seek_data": false, 00:10:57.318 "copy": false, 00:10:57.318 "nvme_iov_md": false 00:10:57.318 }, 00:10:57.318 "memory_domains": [ 00:10:57.318 { 00:10:57.318 "dma_device_id": "system", 00:10:57.318 "dma_device_type": 1 00:10:57.318 }, 00:10:57.318 { 00:10:57.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.318 "dma_device_type": 2 00:10:57.318 }, 00:10:57.318 { 00:10:57.318 "dma_device_id": "system", 00:10:57.318 "dma_device_type": 1 00:10:57.318 }, 00:10:57.318 { 00:10:57.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.318 "dma_device_type": 2 00:10:57.318 }, 00:10:57.318 { 00:10:57.318 "dma_device_id": "system", 00:10:57.318 "dma_device_type": 1 00:10:57.318 }, 00:10:57.318 { 00:10:57.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.318 "dma_device_type": 2 00:10:57.318 }, 00:10:57.318 { 00:10:57.318 "dma_device_id": "system", 00:10:57.318 "dma_device_type": 1 00:10:57.318 }, 00:10:57.318 { 00:10:57.318 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:57.318 "dma_device_type": 2 00:10:57.318 } 00:10:57.318 ], 00:10:57.318 "driver_specific": { 00:10:57.318 "raid": { 00:10:57.318 "uuid": "f358dff6-216c-4ce7-83ac-1ff7b1414a89", 00:10:57.318 "strip_size_kb": 0, 00:10:57.318 "state": "online", 00:10:57.318 "raid_level": "raid1", 00:10:57.318 "superblock": true, 00:10:57.318 "num_base_bdevs": 4, 00:10:57.318 "num_base_bdevs_discovered": 4, 00:10:57.318 "num_base_bdevs_operational": 4, 00:10:57.318 "base_bdevs_list": [ 00:10:57.318 { 00:10:57.318 "name": "pt1", 00:10:57.318 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.318 "is_configured": true, 00:10:57.318 "data_offset": 2048, 00:10:57.318 "data_size": 63488 00:10:57.318 }, 00:10:57.318 { 00:10:57.318 "name": "pt2", 00:10:57.318 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.318 "is_configured": true, 00:10:57.318 "data_offset": 2048, 00:10:57.318 "data_size": 63488 00:10:57.318 }, 00:10:57.318 { 00:10:57.318 "name": "pt3", 00:10:57.318 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.318 "is_configured": true, 00:10:57.318 "data_offset": 2048, 00:10:57.318 "data_size": 63488 00:10:57.318 }, 00:10:57.318 { 00:10:57.318 "name": "pt4", 00:10:57.318 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:57.318 "is_configured": true, 00:10:57.318 "data_offset": 2048, 00:10:57.318 "data_size": 63488 00:10:57.318 } 00:10:57.318 ] 00:10:57.318 } 00:10:57.318 } 00:10:57.318 }' 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:57.318 pt2 00:10:57.318 pt3 00:10:57.318 pt4' 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.318 13:24:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.318 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.578 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.578 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.578 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.578 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:57.578 13:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.578 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.578 13:24:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.578 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.578 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.578 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.578 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.578 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:57.578 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
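The repeated `cmp_base_bdev='512 '` / `[[ 512 == \5\1\2\ \ \ ]]` lines above compare `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` between the raid bdev and each base bdev. A sketch of why that string is `512` plus trailing spaces: jq's `join(" ")` renders null fields as empty strings (the null fields here are my reading of the log, not confirmed from the RPC output):

```python
def jq_join(values):
    """Approximate jq '[...] | join(" ")': nulls become empty strings."""
    return " ".join("" if v is None else str(v) for v in values)

# block_size, md_size, md_interleave, dif_type for a 512-byte bdev
# with no metadata configured, as the comparison in the log suggests.
cmp_base_bdev = jq_join([512, None, None, None])
assert cmp_base_bdev == "512   "  # "512" followed by three spaces
```

So the shell test's escaped pattern `\5\1\2\ \ \ ` is matching those three trailing separator spaces, which the log's single-quoted display truncates visually.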
xtrace_disable 00:10:57.578 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.578 [2024-11-20 13:24:39.047513] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.578 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.578 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f358dff6-216c-4ce7-83ac-1ff7b1414a89 00:10:57.578 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f358dff6-216c-4ce7-83ac-1ff7b1414a89 ']' 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.579 [2024-11-20 13:24:39.095131] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.579 [2024-11-20 13:24:39.095167] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.579 [2024-11-20 13:24:39.095249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.579 [2024-11-20 13:24:39.095340] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.579 [2024-11-20 13:24:39.095351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:57.579 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.839 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:57.839 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:57.839 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:57.839 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:57.839 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:57.839 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.839 13:24:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:57.839 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.839 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:57.839 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.839 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.839 [2024-11-20 13:24:39.258861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:57.839 [2024-11-20 13:24:39.260938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:57.839 [2024-11-20 13:24:39.260986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:57.839 [2024-11-20 13:24:39.261029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:57.839 [2024-11-20 13:24:39.261079] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:57.839 [2024-11-20 13:24:39.261140] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:57.839 [2024-11-20 13:24:39.261164] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:57.839 [2024-11-20 13:24:39.261180] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:57.839 [2024-11-20 13:24:39.261195] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.839 [2024-11-20 13:24:39.261204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name 
raid_bdev1, state configuring 00:10:57.839 request: 00:10:57.839 { 00:10:57.839 "name": "raid_bdev1", 00:10:57.839 "raid_level": "raid1", 00:10:57.839 "base_bdevs": [ 00:10:57.839 "malloc1", 00:10:57.839 "malloc2", 00:10:57.839 "malloc3", 00:10:57.839 "malloc4" 00:10:57.839 ], 00:10:57.839 "superblock": false, 00:10:57.839 "method": "bdev_raid_create", 00:10:57.839 "req_id": 1 00:10:57.839 } 00:10:57.840 Got JSON-RPC error response 00:10:57.840 response: 00:10:57.840 { 00:10:57.840 "code": -17, 00:10:57.840 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:57.840 } 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:57.840 
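The `NOT rpc_cmd bdev_raid_create ...` case above deliberately re-creates the raid bdev from base bdevs whose superblocks already belong to another raid bdev, and expects the RPC to fail. A small sketch of the check implied by the error payload printed in the log (EEXIST surfacing as JSON-RPC error code -17):

```python
import json

# Error response copied from the log above; the test treats this failure
# (non-zero rpc_cmd exit, es=1) as success for the NOT wrapper.
error_response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

assert error_response["code"] == -17
assert "File exists" in error_response["message"]
```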
13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.840 [2024-11-20 13:24:39.326708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:57.840 [2024-11-20 13:24:39.326762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.840 [2024-11-20 13:24:39.326784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:57.840 [2024-11-20 13:24:39.326793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.840 [2024-11-20 13:24:39.329180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.840 [2024-11-20 13:24:39.329214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:57.840 [2024-11-20 13:24:39.329293] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:57.840 [2024-11-20 13:24:39.329333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:57.840 pt1 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.840 13:24:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.840 "name": "raid_bdev1", 00:10:57.840 "uuid": "f358dff6-216c-4ce7-83ac-1ff7b1414a89", 00:10:57.840 "strip_size_kb": 0, 00:10:57.840 "state": "configuring", 00:10:57.840 "raid_level": "raid1", 00:10:57.840 "superblock": true, 00:10:57.840 "num_base_bdevs": 4, 00:10:57.840 "num_base_bdevs_discovered": 1, 00:10:57.840 "num_base_bdevs_operational": 4, 00:10:57.840 "base_bdevs_list": [ 00:10:57.840 { 00:10:57.840 "name": "pt1", 00:10:57.840 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.840 "is_configured": true, 00:10:57.840 "data_offset": 2048, 00:10:57.840 "data_size": 63488 00:10:57.840 }, 00:10:57.840 { 00:10:57.840 "name": null, 00:10:57.840 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.840 "is_configured": false, 00:10:57.840 "data_offset": 2048, 00:10:57.840 "data_size": 63488 00:10:57.840 }, 00:10:57.840 { 00:10:57.840 "name": null, 00:10:57.840 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.840 
"is_configured": false, 00:10:57.840 "data_offset": 2048, 00:10:57.840 "data_size": 63488 00:10:57.840 }, 00:10:57.840 { 00:10:57.840 "name": null, 00:10:57.840 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:57.840 "is_configured": false, 00:10:57.840 "data_offset": 2048, 00:10:57.840 "data_size": 63488 00:10:57.840 } 00:10:57.840 ] 00:10:57.840 }' 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.840 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.411 [2024-11-20 13:24:39.797945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.411 [2024-11-20 13:24:39.798104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.411 [2024-11-20 13:24:39.798134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:58.411 [2024-11-20 13:24:39.798145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.411 [2024-11-20 13:24:39.798597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.411 [2024-11-20 13:24:39.798617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.411 [2024-11-20 13:24:39.798700] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:58.411 [2024-11-20 13:24:39.798723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:10:58.411 pt2 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.411 [2024-11-20 13:24:39.809947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.411 "name": "raid_bdev1", 00:10:58.411 "uuid": "f358dff6-216c-4ce7-83ac-1ff7b1414a89", 00:10:58.411 "strip_size_kb": 0, 00:10:58.411 "state": "configuring", 00:10:58.411 "raid_level": "raid1", 00:10:58.411 "superblock": true, 00:10:58.411 "num_base_bdevs": 4, 00:10:58.411 "num_base_bdevs_discovered": 1, 00:10:58.411 "num_base_bdevs_operational": 4, 00:10:58.411 "base_bdevs_list": [ 00:10:58.411 { 00:10:58.411 "name": "pt1", 00:10:58.411 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.411 "is_configured": true, 00:10:58.411 "data_offset": 2048, 00:10:58.411 "data_size": 63488 00:10:58.411 }, 00:10:58.411 { 00:10:58.411 "name": null, 00:10:58.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.411 "is_configured": false, 00:10:58.411 "data_offset": 0, 00:10:58.411 "data_size": 63488 00:10:58.411 }, 00:10:58.411 { 00:10:58.411 "name": null, 00:10:58.411 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.411 "is_configured": false, 00:10:58.411 "data_offset": 2048, 00:10:58.411 "data_size": 63488 00:10:58.411 }, 00:10:58.411 { 00:10:58.411 "name": null, 00:10:58.411 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.411 "is_configured": false, 00:10:58.411 "data_offset": 2048, 00:10:58.411 "data_size": 63488 00:10:58.411 } 00:10:58.411 ] 00:10:58.411 }' 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.411 13:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.671 [2024-11-20 13:24:40.293144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.671 [2024-11-20 13:24:40.293287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.671 [2024-11-20 13:24:40.293340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:58.671 [2024-11-20 13:24:40.293376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.671 [2024-11-20 13:24:40.293852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.671 [2024-11-20 13:24:40.293917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.671 [2024-11-20 13:24:40.294044] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:58.671 [2024-11-20 13:24:40.294104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.671 pt2 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:58.671 13:24:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.671 [2024-11-20 13:24:40.305064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:58.671 [2024-11-20 13:24:40.305144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.671 [2024-11-20 13:24:40.305175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:58.671 [2024-11-20 13:24:40.305233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.671 [2024-11-20 13:24:40.305678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.671 [2024-11-20 13:24:40.305742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:58.671 [2024-11-20 13:24:40.305838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:58.671 [2024-11-20 13:24:40.305892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:58.671 pt3 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.671 [2024-11-20 13:24:40.317042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:58.671 [2024-11-20 
13:24:40.317118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.671 [2024-11-20 13:24:40.317134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:58.671 [2024-11-20 13:24:40.317144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.671 [2024-11-20 13:24:40.317501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.671 [2024-11-20 13:24:40.317526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:58.671 [2024-11-20 13:24:40.317593] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:58.671 [2024-11-20 13:24:40.317616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:58.671 [2024-11-20 13:24:40.317741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:58.671 [2024-11-20 13:24:40.317764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:58.671 [2024-11-20 13:24:40.318029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:58.671 [2024-11-20 13:24:40.318176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:58.671 [2024-11-20 13:24:40.318190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:58.671 [2024-11-20 13:24:40.318306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.671 pt4 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.671 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.672 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.672 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.672 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.672 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.672 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.672 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.672 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.959 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.959 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.959 "name": "raid_bdev1", 00:10:58.959 "uuid": "f358dff6-216c-4ce7-83ac-1ff7b1414a89", 00:10:58.959 "strip_size_kb": 0, 00:10:58.959 "state": "online", 00:10:58.959 "raid_level": "raid1", 00:10:58.959 "superblock": true, 00:10:58.959 "num_base_bdevs": 4, 00:10:58.959 
"num_base_bdevs_discovered": 4, 00:10:58.959 "num_base_bdevs_operational": 4, 00:10:58.959 "base_bdevs_list": [ 00:10:58.959 { 00:10:58.959 "name": "pt1", 00:10:58.959 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.959 "is_configured": true, 00:10:58.959 "data_offset": 2048, 00:10:58.959 "data_size": 63488 00:10:58.959 }, 00:10:58.959 { 00:10:58.959 "name": "pt2", 00:10:58.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.959 "is_configured": true, 00:10:58.959 "data_offset": 2048, 00:10:58.959 "data_size": 63488 00:10:58.959 }, 00:10:58.959 { 00:10:58.959 "name": "pt3", 00:10:58.959 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.959 "is_configured": true, 00:10:58.960 "data_offset": 2048, 00:10:58.960 "data_size": 63488 00:10:58.960 }, 00:10:58.960 { 00:10:58.960 "name": "pt4", 00:10:58.960 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.960 "is_configured": true, 00:10:58.960 "data_offset": 2048, 00:10:58.960 "data_size": 63488 00:10:58.960 } 00:10:58.960 ] 00:10:58.960 }' 00:10:58.960 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.960 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.225 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:59.225 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:59.225 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:59.225 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:59.225 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.225 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.225 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.225 13:24:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:59.225 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.225 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.225 [2024-11-20 13:24:40.772605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.225 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.225 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:59.225 "name": "raid_bdev1", 00:10:59.225 "aliases": [ 00:10:59.225 "f358dff6-216c-4ce7-83ac-1ff7b1414a89" 00:10:59.225 ], 00:10:59.225 "product_name": "Raid Volume", 00:10:59.225 "block_size": 512, 00:10:59.225 "num_blocks": 63488, 00:10:59.225 "uuid": "f358dff6-216c-4ce7-83ac-1ff7b1414a89", 00:10:59.225 "assigned_rate_limits": { 00:10:59.225 "rw_ios_per_sec": 0, 00:10:59.225 "rw_mbytes_per_sec": 0, 00:10:59.225 "r_mbytes_per_sec": 0, 00:10:59.225 "w_mbytes_per_sec": 0 00:10:59.225 }, 00:10:59.225 "claimed": false, 00:10:59.225 "zoned": false, 00:10:59.225 "supported_io_types": { 00:10:59.225 "read": true, 00:10:59.225 "write": true, 00:10:59.225 "unmap": false, 00:10:59.225 "flush": false, 00:10:59.225 "reset": true, 00:10:59.225 "nvme_admin": false, 00:10:59.225 "nvme_io": false, 00:10:59.225 "nvme_io_md": false, 00:10:59.225 "write_zeroes": true, 00:10:59.225 "zcopy": false, 00:10:59.225 "get_zone_info": false, 00:10:59.225 "zone_management": false, 00:10:59.225 "zone_append": false, 00:10:59.225 "compare": false, 00:10:59.225 "compare_and_write": false, 00:10:59.225 "abort": false, 00:10:59.225 "seek_hole": false, 00:10:59.225 "seek_data": false, 00:10:59.225 "copy": false, 00:10:59.225 "nvme_iov_md": false 00:10:59.225 }, 00:10:59.225 "memory_domains": [ 00:10:59.225 { 00:10:59.225 "dma_device_id": "system", 00:10:59.225 
"dma_device_type": 1 00:10:59.225 }, 00:10:59.225 { 00:10:59.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.225 "dma_device_type": 2 00:10:59.225 }, 00:10:59.225 { 00:10:59.225 "dma_device_id": "system", 00:10:59.225 "dma_device_type": 1 00:10:59.225 }, 00:10:59.225 { 00:10:59.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.225 "dma_device_type": 2 00:10:59.225 }, 00:10:59.225 { 00:10:59.225 "dma_device_id": "system", 00:10:59.225 "dma_device_type": 1 00:10:59.225 }, 00:10:59.225 { 00:10:59.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.225 "dma_device_type": 2 00:10:59.225 }, 00:10:59.225 { 00:10:59.225 "dma_device_id": "system", 00:10:59.225 "dma_device_type": 1 00:10:59.225 }, 00:10:59.225 { 00:10:59.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.225 "dma_device_type": 2 00:10:59.225 } 00:10:59.225 ], 00:10:59.225 "driver_specific": { 00:10:59.225 "raid": { 00:10:59.225 "uuid": "f358dff6-216c-4ce7-83ac-1ff7b1414a89", 00:10:59.225 "strip_size_kb": 0, 00:10:59.225 "state": "online", 00:10:59.225 "raid_level": "raid1", 00:10:59.225 "superblock": true, 00:10:59.225 "num_base_bdevs": 4, 00:10:59.225 "num_base_bdevs_discovered": 4, 00:10:59.225 "num_base_bdevs_operational": 4, 00:10:59.225 "base_bdevs_list": [ 00:10:59.225 { 00:10:59.225 "name": "pt1", 00:10:59.225 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.225 "is_configured": true, 00:10:59.225 "data_offset": 2048, 00:10:59.225 "data_size": 63488 00:10:59.225 }, 00:10:59.225 { 00:10:59.225 "name": "pt2", 00:10:59.225 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.225 "is_configured": true, 00:10:59.225 "data_offset": 2048, 00:10:59.225 "data_size": 63488 00:10:59.225 }, 00:10:59.225 { 00:10:59.225 "name": "pt3", 00:10:59.225 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.225 "is_configured": true, 00:10:59.225 "data_offset": 2048, 00:10:59.225 "data_size": 63488 00:10:59.226 }, 00:10:59.226 { 00:10:59.226 "name": "pt4", 00:10:59.226 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:10:59.226 "is_configured": true, 00:10:59.226 "data_offset": 2048, 00:10:59.226 "data_size": 63488 00:10:59.226 } 00:10:59.226 ] 00:10:59.226 } 00:10:59.226 } 00:10:59.226 }' 00:10:59.226 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:59.226 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:59.226 pt2 00:10:59.226 pt3 00:10:59.226 pt4' 00:10:59.226 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.226 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:59.226 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.226 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:59.226 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.226 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.226 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.486 13:24:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:59.486 13:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:59.486 [2024-11-20 13:24:41.048186] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f358dff6-216c-4ce7-83ac-1ff7b1414a89 '!=' f358dff6-216c-4ce7-83ac-1ff7b1414a89 ']' 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.486 [2024-11-20 13:24:41.095826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:59.486 13:24:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.486 "name": "raid_bdev1", 00:10:59.486 "uuid": "f358dff6-216c-4ce7-83ac-1ff7b1414a89", 00:10:59.486 "strip_size_kb": 0, 00:10:59.486 "state": "online", 
00:10:59.486 "raid_level": "raid1", 00:10:59.486 "superblock": true, 00:10:59.486 "num_base_bdevs": 4, 00:10:59.486 "num_base_bdevs_discovered": 3, 00:10:59.486 "num_base_bdevs_operational": 3, 00:10:59.486 "base_bdevs_list": [ 00:10:59.486 { 00:10:59.486 "name": null, 00:10:59.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.486 "is_configured": false, 00:10:59.486 "data_offset": 0, 00:10:59.486 "data_size": 63488 00:10:59.486 }, 00:10:59.486 { 00:10:59.486 "name": "pt2", 00:10:59.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.486 "is_configured": true, 00:10:59.486 "data_offset": 2048, 00:10:59.486 "data_size": 63488 00:10:59.486 }, 00:10:59.486 { 00:10:59.486 "name": "pt3", 00:10:59.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.486 "is_configured": true, 00:10:59.486 "data_offset": 2048, 00:10:59.486 "data_size": 63488 00:10:59.486 }, 00:10:59.486 { 00:10:59.486 "name": "pt4", 00:10:59.486 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:59.486 "is_configured": true, 00:10:59.486 "data_offset": 2048, 00:10:59.486 "data_size": 63488 00:10:59.486 } 00:10:59.486 ] 00:10:59.486 }' 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.486 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.057 [2024-11-20 13:24:41.495162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.057 [2024-11-20 13:24:41.495207] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.057 [2024-11-20 13:24:41.495301] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:00.057 [2024-11-20 13:24:41.495379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.057 [2024-11-20 13:24:41.495410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:00.057 
13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.057 [2024-11-20 13:24:41.579028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:00.057 [2024-11-20 13:24:41.579090] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.057 [2024-11-20 13:24:41.579109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:00.057 [2024-11-20 13:24:41.579120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.057 [2024-11-20 13:24:41.581519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.057 [2024-11-20 13:24:41.581564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:00.057 [2024-11-20 13:24:41.581642] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:00.057 [2024-11-20 13:24:41.581682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:00.057 pt2 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.057 "name": "raid_bdev1", 00:11:00.057 "uuid": "f358dff6-216c-4ce7-83ac-1ff7b1414a89", 00:11:00.057 "strip_size_kb": 0, 00:11:00.057 "state": "configuring", 00:11:00.057 "raid_level": "raid1", 00:11:00.057 "superblock": true, 00:11:00.057 "num_base_bdevs": 4, 00:11:00.057 "num_base_bdevs_discovered": 1, 00:11:00.057 "num_base_bdevs_operational": 3, 00:11:00.057 "base_bdevs_list": [ 00:11:00.057 { 00:11:00.057 "name": null, 00:11:00.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.057 "is_configured": false, 00:11:00.057 "data_offset": 2048, 00:11:00.057 "data_size": 63488 00:11:00.057 }, 00:11:00.057 { 00:11:00.057 "name": "pt2", 00:11:00.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.057 "is_configured": true, 00:11:00.057 "data_offset": 2048, 00:11:00.057 "data_size": 63488 00:11:00.057 }, 00:11:00.057 { 00:11:00.057 "name": null, 00:11:00.057 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.057 "is_configured": false, 00:11:00.057 "data_offset": 2048, 00:11:00.057 "data_size": 63488 00:11:00.057 }, 00:11:00.057 { 00:11:00.057 "name": null, 00:11:00.057 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:00.057 "is_configured": false, 00:11:00.057 "data_offset": 2048, 00:11:00.057 "data_size": 63488 00:11:00.057 } 00:11:00.057 ] 00:11:00.057 }' 
00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.057 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.628 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:00.628 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:00.628 13:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:00.628 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.628 13:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.628 [2024-11-20 13:24:41.998350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:00.628 [2024-11-20 13:24:41.998430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.628 [2024-11-20 13:24:41.998453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:00.628 [2024-11-20 13:24:41.998466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.628 [2024-11-20 13:24:41.998867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.628 [2024-11-20 13:24:41.998895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:00.628 [2024-11-20 13:24:41.998974] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:00.628 [2024-11-20 13:24:41.999037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:00.628 pt3 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.628 "name": "raid_bdev1", 00:11:00.628 "uuid": "f358dff6-216c-4ce7-83ac-1ff7b1414a89", 00:11:00.628 "strip_size_kb": 0, 00:11:00.628 "state": "configuring", 00:11:00.628 "raid_level": "raid1", 00:11:00.628 "superblock": true, 00:11:00.628 "num_base_bdevs": 4, 00:11:00.628 "num_base_bdevs_discovered": 2, 00:11:00.628 "num_base_bdevs_operational": 3, 00:11:00.628 
"base_bdevs_list": [ 00:11:00.628 { 00:11:00.628 "name": null, 00:11:00.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.628 "is_configured": false, 00:11:00.628 "data_offset": 2048, 00:11:00.628 "data_size": 63488 00:11:00.628 }, 00:11:00.628 { 00:11:00.628 "name": "pt2", 00:11:00.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.628 "is_configured": true, 00:11:00.628 "data_offset": 2048, 00:11:00.628 "data_size": 63488 00:11:00.628 }, 00:11:00.628 { 00:11:00.628 "name": "pt3", 00:11:00.628 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.628 "is_configured": true, 00:11:00.628 "data_offset": 2048, 00:11:00.628 "data_size": 63488 00:11:00.628 }, 00:11:00.628 { 00:11:00.628 "name": null, 00:11:00.628 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:00.628 "is_configured": false, 00:11:00.628 "data_offset": 2048, 00:11:00.628 "data_size": 63488 00:11:00.628 } 00:11:00.628 ] 00:11:00.628 }' 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.628 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.888 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:00.888 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:00.888 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:00.888 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:00.888 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.888 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.888 [2024-11-20 13:24:42.441588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:00.888 [2024-11-20 13:24:42.441653] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.888 [2024-11-20 13:24:42.441672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:00.888 [2024-11-20 13:24:42.441684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.888 [2024-11-20 13:24:42.442161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.888 [2024-11-20 13:24:42.442195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:00.888 [2024-11-20 13:24:42.442277] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:00.888 [2024-11-20 13:24:42.442307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:00.888 [2024-11-20 13:24:42.442414] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:00.888 [2024-11-20 13:24:42.442430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:00.888 [2024-11-20 13:24:42.442688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:00.888 [2024-11-20 13:24:42.442832] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:00.888 [2024-11-20 13:24:42.442847] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:11:00.888 [2024-11-20 13:24:42.442967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.888 pt4 00:11:00.888 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.888 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:00.888 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.888 13:24:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.888 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.889 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.889 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.889 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.889 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.889 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.889 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.889 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.889 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.889 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.889 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.889 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.889 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.889 "name": "raid_bdev1", 00:11:00.889 "uuid": "f358dff6-216c-4ce7-83ac-1ff7b1414a89", 00:11:00.889 "strip_size_kb": 0, 00:11:00.889 "state": "online", 00:11:00.889 "raid_level": "raid1", 00:11:00.889 "superblock": true, 00:11:00.889 "num_base_bdevs": 4, 00:11:00.889 "num_base_bdevs_discovered": 3, 00:11:00.889 "num_base_bdevs_operational": 3, 00:11:00.889 "base_bdevs_list": [ 00:11:00.889 { 00:11:00.889 "name": null, 00:11:00.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.889 "is_configured": false, 00:11:00.889 
"data_offset": 2048, 00:11:00.889 "data_size": 63488 00:11:00.889 }, 00:11:00.889 { 00:11:00.889 "name": "pt2", 00:11:00.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.889 "is_configured": true, 00:11:00.889 "data_offset": 2048, 00:11:00.889 "data_size": 63488 00:11:00.889 }, 00:11:00.889 { 00:11:00.889 "name": "pt3", 00:11:00.889 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.889 "is_configured": true, 00:11:00.889 "data_offset": 2048, 00:11:00.889 "data_size": 63488 00:11:00.889 }, 00:11:00.889 { 00:11:00.889 "name": "pt4", 00:11:00.889 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:00.889 "is_configured": true, 00:11:00.889 "data_offset": 2048, 00:11:00.889 "data_size": 63488 00:11:00.889 } 00:11:00.889 ] 00:11:00.889 }' 00:11:00.889 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.889 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.460 [2024-11-20 13:24:42.896814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.460 [2024-11-20 13:24:42.896853] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.460 [2024-11-20 13:24:42.896939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.460 [2024-11-20 13:24:42.897036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.460 [2024-11-20 13:24:42.897048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:11:01.460 13:24:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.460 [2024-11-20 13:24:42.968681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:01.460 [2024-11-20 13:24:42.968737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:01.460 [2024-11-20 13:24:42.968758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:01.460 [2024-11-20 13:24:42.968767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.460 [2024-11-20 13:24:42.970925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.460 [2024-11-20 13:24:42.970961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:01.460 [2024-11-20 13:24:42.971041] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:01.460 [2024-11-20 13:24:42.971080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:01.460 [2024-11-20 13:24:42.971218] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:01.460 [2024-11-20 13:24:42.971239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.460 [2024-11-20 13:24:42.971261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:11:01.460 [2024-11-20 13:24:42.971292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:01.460 [2024-11-20 13:24:42.971389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:01.460 pt1 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.460 13:24:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.460 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.460 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.460 "name": "raid_bdev1", 00:11:01.460 "uuid": "f358dff6-216c-4ce7-83ac-1ff7b1414a89", 00:11:01.460 "strip_size_kb": 0, 00:11:01.460 "state": "configuring", 00:11:01.460 "raid_level": "raid1", 00:11:01.460 "superblock": true, 00:11:01.460 "num_base_bdevs": 4, 00:11:01.460 "num_base_bdevs_discovered": 2, 00:11:01.460 "num_base_bdevs_operational": 3, 00:11:01.460 "base_bdevs_list": [ 00:11:01.460 { 00:11:01.460 "name": null, 00:11:01.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.460 "is_configured": false, 00:11:01.460 "data_offset": 2048, 00:11:01.460 
"data_size": 63488 00:11:01.460 }, 00:11:01.460 { 00:11:01.460 "name": "pt2", 00:11:01.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.460 "is_configured": true, 00:11:01.460 "data_offset": 2048, 00:11:01.460 "data_size": 63488 00:11:01.460 }, 00:11:01.460 { 00:11:01.460 "name": "pt3", 00:11:01.460 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.460 "is_configured": true, 00:11:01.461 "data_offset": 2048, 00:11:01.461 "data_size": 63488 00:11:01.461 }, 00:11:01.461 { 00:11:01.461 "name": null, 00:11:01.461 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:01.461 "is_configured": false, 00:11:01.461 "data_offset": 2048, 00:11:01.461 "data_size": 63488 00:11:01.461 } 00:11:01.461 ] 00:11:01.461 }' 00:11:01.461 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.461 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.031 [2024-11-20 
13:24:43.467826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:02.031 [2024-11-20 13:24:43.467895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.031 [2024-11-20 13:24:43.467916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:02.031 [2024-11-20 13:24:43.467926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.031 [2024-11-20 13:24:43.468343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.031 [2024-11-20 13:24:43.468372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:02.031 [2024-11-20 13:24:43.468444] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:02.031 [2024-11-20 13:24:43.468467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:02.031 [2024-11-20 13:24:43.468566] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:11:02.031 [2024-11-20 13:24:43.468581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:02.031 [2024-11-20 13:24:43.468829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:02.031 [2024-11-20 13:24:43.468957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:11:02.031 [2024-11-20 13:24:43.468968] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:11:02.031 [2024-11-20 13:24:43.469097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.031 pt4 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:02.031 13:24:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.031 "name": "raid_bdev1", 00:11:02.031 "uuid": "f358dff6-216c-4ce7-83ac-1ff7b1414a89", 00:11:02.031 "strip_size_kb": 0, 00:11:02.031 "state": "online", 00:11:02.031 "raid_level": "raid1", 00:11:02.031 "superblock": true, 00:11:02.031 "num_base_bdevs": 4, 00:11:02.031 "num_base_bdevs_discovered": 3, 00:11:02.031 "num_base_bdevs_operational": 3, 00:11:02.031 "base_bdevs_list": [ 00:11:02.031 { 
00:11:02.031 "name": null, 00:11:02.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.031 "is_configured": false, 00:11:02.031 "data_offset": 2048, 00:11:02.031 "data_size": 63488 00:11:02.031 }, 00:11:02.031 { 00:11:02.031 "name": "pt2", 00:11:02.031 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.031 "is_configured": true, 00:11:02.031 "data_offset": 2048, 00:11:02.031 "data_size": 63488 00:11:02.031 }, 00:11:02.031 { 00:11:02.031 "name": "pt3", 00:11:02.031 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.031 "is_configured": true, 00:11:02.031 "data_offset": 2048, 00:11:02.031 "data_size": 63488 00:11:02.031 }, 00:11:02.031 { 00:11:02.031 "name": "pt4", 00:11:02.031 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:02.031 "is_configured": true, 00:11:02.031 "data_offset": 2048, 00:11:02.031 "data_size": 63488 00:11:02.031 } 00:11:02.031 ] 00:11:02.031 }' 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.031 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.291 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:02.291 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:02.291 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.291 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.291 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.291 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:02.291 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:02.291 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:02.291 
13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.291 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.291 [2024-11-20 13:24:43.951440] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.551 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.551 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f358dff6-216c-4ce7-83ac-1ff7b1414a89 '!=' f358dff6-216c-4ce7-83ac-1ff7b1414a89 ']' 00:11:02.551 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84970 00:11:02.551 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84970 ']' 00:11:02.551 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84970 00:11:02.551 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:02.551 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.551 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84970 00:11:02.551 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.551 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.551 killing process with pid 84970 00:11:02.551 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84970' 00:11:02.551 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 84970 00:11:02.551 [2024-11-20 13:24:44.037395] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.551 [2024-11-20 13:24:44.037510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.551 13:24:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 84970 00:11:02.551 [2024-11-20 13:24:44.037598] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.551 [2024-11-20 13:24:44.037608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:11:02.551 [2024-11-20 13:24:44.083168] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:02.811 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:02.811 00:11:02.811 real 0m7.094s 00:11:02.811 user 0m12.003s 00:11:02.811 sys 0m1.497s 00:11:02.811 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.811 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.811 ************************************ 00:11:02.811 END TEST raid_superblock_test 00:11:02.811 ************************************ 00:11:02.811 13:24:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:02.811 13:24:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:02.811 13:24:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.811 13:24:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.811 ************************************ 00:11:02.811 START TEST raid_read_error_test 00:11:02.811 ************************************ 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:02.811 
13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:02.811 13:24:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VqYeuMSjE7 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85441 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85441 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 85441 ']' 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.811 13:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.070 [2024-11-20 13:24:44.492255] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:11:03.070 [2024-11-20 13:24:44.492393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85441 ] 00:11:03.070 [2024-11-20 13:24:44.650892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.070 [2024-11-20 13:24:44.678843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.070 [2024-11-20 13:24:44.722099] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.071 [2024-11-20 13:24:44.722146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.008 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.009 BaseBdev1_malloc 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.009 true 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.009 [2024-11-20 13:24:45.388802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:04.009 [2024-11-20 13:24:45.388864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.009 [2024-11-20 13:24:45.388897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:11:04.009 [2024-11-20 13:24:45.388907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.009 [2024-11-20 13:24:45.391360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.009 [2024-11-20 13:24:45.391399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:04.009 BaseBdev1 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.009 BaseBdev2_malloc 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.009 true 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.009 [2024-11-20 13:24:45.430020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:04.009 [2024-11-20 13:24:45.430072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.009 [2024-11-20 13:24:45.430091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:11:04.009 [2024-11-20 13:24:45.430109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.009 [2024-11-20 13:24:45.432504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.009 [2024-11-20 13:24:45.432550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:04.009 BaseBdev2 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.009 BaseBdev3_malloc 00:11:04.009 13:24:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.009 true 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.009 [2024-11-20 13:24:45.471313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:04.009 [2024-11-20 13:24:45.471365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.009 [2024-11-20 13:24:45.471384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:11:04.009 [2024-11-20 13:24:45.471393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.009 [2024-11-20 13:24:45.473683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.009 [2024-11-20 13:24:45.473725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:04.009 BaseBdev3 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.009 BaseBdev4_malloc 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.009 true 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.009 [2024-11-20 13:24:45.521501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:04.009 [2024-11-20 13:24:45.521558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.009 [2024-11-20 13:24:45.521581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:04.009 [2024-11-20 13:24:45.521590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.009 [2024-11-20 13:24:45.523780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.009 [2024-11-20 13:24:45.523819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:04.009 BaseBdev4 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.009 [2024-11-20 13:24:45.533559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.009 [2024-11-20 13:24:45.535519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.009 [2024-11-20 13:24:45.535609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.009 [2024-11-20 13:24:45.535667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:04.009 [2024-11-20 13:24:45.535902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:11:04.009 [2024-11-20 13:24:45.535921] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:04.009 [2024-11-20 13:24:45.536204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:11:04.009 [2024-11-20 13:24:45.536365] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:11:04.009 [2024-11-20 13:24:45.536384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:11:04.009 [2024-11-20 13:24:45.536519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:04.009 13:24:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.009 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.010 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.010 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.010 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.010 "name": "raid_bdev1", 00:11:04.010 "uuid": "2b24abda-6a15-40cf-a120-e70d5dd30942", 00:11:04.010 "strip_size_kb": 0, 00:11:04.010 "state": "online", 00:11:04.010 "raid_level": "raid1", 00:11:04.010 "superblock": true, 00:11:04.010 "num_base_bdevs": 4, 00:11:04.010 "num_base_bdevs_discovered": 4, 00:11:04.010 "num_base_bdevs_operational": 4, 00:11:04.010 "base_bdevs_list": [ 00:11:04.010 { 
00:11:04.010 "name": "BaseBdev1", 00:11:04.010 "uuid": "6e4c5309-5530-5b65-9224-c5c28cc2e5c8", 00:11:04.010 "is_configured": true, 00:11:04.010 "data_offset": 2048, 00:11:04.010 "data_size": 63488 00:11:04.010 }, 00:11:04.010 { 00:11:04.010 "name": "BaseBdev2", 00:11:04.010 "uuid": "85025279-6913-5317-a923-3c44442a1aee", 00:11:04.010 "is_configured": true, 00:11:04.010 "data_offset": 2048, 00:11:04.010 "data_size": 63488 00:11:04.010 }, 00:11:04.010 { 00:11:04.010 "name": "BaseBdev3", 00:11:04.010 "uuid": "81d3cac5-b3fb-593d-a502-b632f378ef02", 00:11:04.010 "is_configured": true, 00:11:04.010 "data_offset": 2048, 00:11:04.010 "data_size": 63488 00:11:04.010 }, 00:11:04.010 { 00:11:04.010 "name": "BaseBdev4", 00:11:04.010 "uuid": "1697678e-2c1f-558c-9211-bbb2116c6583", 00:11:04.010 "is_configured": true, 00:11:04.010 "data_offset": 2048, 00:11:04.010 "data_size": 63488 00:11:04.010 } 00:11:04.010 ] 00:11:04.010 }' 00:11:04.010 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.010 13:24:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.269 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:04.269 13:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:04.528 [2024-11-20 13:24:45.997164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.467 13:24:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.467 13:24:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.467 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.467 "name": "raid_bdev1", 00:11:05.467 "uuid": "2b24abda-6a15-40cf-a120-e70d5dd30942", 00:11:05.467 "strip_size_kb": 0, 00:11:05.467 "state": "online", 00:11:05.467 "raid_level": "raid1", 00:11:05.467 "superblock": true, 00:11:05.467 "num_base_bdevs": 4, 00:11:05.468 "num_base_bdevs_discovered": 4, 00:11:05.468 "num_base_bdevs_operational": 4, 00:11:05.468 "base_bdevs_list": [ 00:11:05.468 { 00:11:05.468 "name": "BaseBdev1", 00:11:05.468 "uuid": "6e4c5309-5530-5b65-9224-c5c28cc2e5c8", 00:11:05.468 "is_configured": true, 00:11:05.468 "data_offset": 2048, 00:11:05.468 "data_size": 63488 00:11:05.468 }, 00:11:05.468 { 00:11:05.468 "name": "BaseBdev2", 00:11:05.468 "uuid": "85025279-6913-5317-a923-3c44442a1aee", 00:11:05.468 "is_configured": true, 00:11:05.468 "data_offset": 2048, 00:11:05.468 "data_size": 63488 00:11:05.468 }, 00:11:05.468 { 00:11:05.468 "name": "BaseBdev3", 00:11:05.468 "uuid": "81d3cac5-b3fb-593d-a502-b632f378ef02", 00:11:05.468 "is_configured": true, 00:11:05.468 "data_offset": 2048, 00:11:05.468 "data_size": 63488 00:11:05.468 }, 00:11:05.468 { 00:11:05.468 "name": "BaseBdev4", 00:11:05.468 "uuid": "1697678e-2c1f-558c-9211-bbb2116c6583", 00:11:05.468 "is_configured": true, 00:11:05.468 "data_offset": 2048, 00:11:05.468 "data_size": 63488 00:11:05.468 } 00:11:05.468 ] 00:11:05.468 }' 00:11:05.468 13:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.468 13:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.726 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:05.726 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.726 13:24:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.726 [2024-11-20 13:24:47.368010] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:05.727 [2024-11-20 13:24:47.368050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.727 [2024-11-20 13:24:47.370827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.727 [2024-11-20 13:24:47.370893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.727 [2024-11-20 13:24:47.371050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.727 [2024-11-20 13:24:47.371067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:11:05.727 { 00:11:05.727 "results": [ 00:11:05.727 { 00:11:05.727 "job": "raid_bdev1", 00:11:05.727 "core_mask": "0x1", 00:11:05.727 "workload": "randrw", 00:11:05.727 "percentage": 50, 00:11:05.727 "status": "finished", 00:11:05.727 "queue_depth": 1, 00:11:05.727 "io_size": 131072, 00:11:05.727 "runtime": 1.371536, 00:11:05.727 "iops": 10803.945357613653, 00:11:05.727 "mibps": 1350.4931697017066, 00:11:05.727 "io_failed": 0, 00:11:05.727 "io_timeout": 0, 00:11:05.727 "avg_latency_us": 89.77094846878663, 00:11:05.727 "min_latency_us": 23.14061135371179, 00:11:05.727 "max_latency_us": 1745.7187772925763 00:11:05.727 } 00:11:05.727 ], 00:11:05.727 "core_count": 1 00:11:05.727 } 00:11:05.727 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.727 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85441 00:11:05.727 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 85441 ']' 00:11:05.727 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 85441 00:11:05.727 13:24:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:05.727 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.727 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85441 00:11:05.984 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.984 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.984 killing process with pid 85441 00:11:05.984 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85441' 00:11:05.984 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 85441 00:11:05.984 [2024-11-20 13:24:47.419387] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:05.984 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 85441 00:11:05.984 [2024-11-20 13:24:47.457258] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:06.244 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VqYeuMSjE7 00:11:06.244 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:06.244 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:06.244 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:06.244 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:06.244 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:06.244 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:06.244 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:06.244 00:11:06.244 real 0m3.312s 00:11:06.244 user 0m4.177s 00:11:06.244 sys 0m0.540s 
00:11:06.244 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.244 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.244 ************************************ 00:11:06.244 END TEST raid_read_error_test 00:11:06.244 ************************************ 00:11:06.244 13:24:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:06.244 13:24:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:06.244 13:24:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.244 13:24:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:06.244 ************************************ 00:11:06.244 START TEST raid_write_error_test 00:11:06.244 ************************************ 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hy65wrrntI 00:11:06.244 13:24:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85570 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85570 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 85570 ']' 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.244 13:24:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.244 [2024-11-20 13:24:47.852725] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:11:06.244 [2024-11-20 13:24:47.852982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85570 ] 00:11:06.503 [2024-11-20 13:24:48.001045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.503 [2024-11-20 13:24:48.035760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.503 [2024-11-20 13:24:48.088410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.503 [2024-11-20 13:24:48.088475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.071 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.071 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:07.072 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:07.072 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:07.072 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.072 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.072 BaseBdev1_malloc 00:11:07.072 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.072 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:07.072 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.072 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.332 true 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.332 [2024-11-20 13:24:48.754356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:07.332 [2024-11-20 13:24:48.754429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.332 [2024-11-20 13:24:48.754459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:11:07.332 [2024-11-20 13:24:48.754471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.332 [2024-11-20 13:24:48.757087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.332 [2024-11-20 13:24:48.757130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:07.332 BaseBdev1 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.332 BaseBdev2_malloc 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:07.332 13:24:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.332 true 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.332 [2024-11-20 13:24:48.783896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:07.332 [2024-11-20 13:24:48.783956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.332 [2024-11-20 13:24:48.783979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:11:07.332 [2024-11-20 13:24:48.784020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.332 [2024-11-20 13:24:48.786541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.332 [2024-11-20 13:24:48.786590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:07.332 BaseBdev2 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:07.332 BaseBdev3_malloc 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.332 true 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.332 [2024-11-20 13:24:48.813457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:07.332 [2024-11-20 13:24:48.813516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.332 [2024-11-20 13:24:48.813539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:11:07.332 [2024-11-20 13:24:48.813549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.332 [2024-11-20 13:24:48.815911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.332 [2024-11-20 13:24:48.815954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:07.332 BaseBdev3 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.332 BaseBdev4_malloc 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.332 true 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.332 [2024-11-20 13:24:48.851623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:07.332 [2024-11-20 13:24:48.851684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.332 [2024-11-20 13:24:48.851713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:07.332 [2024-11-20 13:24:48.851723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.332 [2024-11-20 13:24:48.853958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.332 [2024-11-20 13:24:48.854013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:07.332 BaseBdev4 
00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.332 [2024-11-20 13:24:48.859636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.332 [2024-11-20 13:24:48.861623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:07.332 [2024-11-20 13:24:48.861705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.332 [2024-11-20 13:24:48.861792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:07.332 [2024-11-20 13:24:48.862050] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:11:07.332 [2024-11-20 13:24:48.862070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:07.332 [2024-11-20 13:24:48.862374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:11:07.332 [2024-11-20 13:24:48.862558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:11:07.332 [2024-11-20 13:24:48.862579] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:11:07.332 [2024-11-20 13:24:48.862731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4
00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.332 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:07.332 "name": "raid_bdev1",
00:11:07.332 "uuid": "67b0dece-0e44-4994-b406-5d9e216ce73e",
00:11:07.332 "strip_size_kb": 0,
00:11:07.333 "state": "online",
00:11:07.333 "raid_level": "raid1",
00:11:07.333 "superblock": true,
00:11:07.333 "num_base_bdevs": 4,
00:11:07.333 "num_base_bdevs_discovered": 4,
00:11:07.333 "num_base_bdevs_operational": 4,
00:11:07.333 "base_bdevs_list": [
00:11:07.333 {
00:11:07.333 "name": "BaseBdev1",
00:11:07.333 "uuid": "3b36f742-0717-5451-aee5-a9bc4381916f",
00:11:07.333 "is_configured": true,
00:11:07.333 "data_offset": 2048,
00:11:07.333 "data_size": 63488
00:11:07.333 },
00:11:07.333 {
00:11:07.333 "name": "BaseBdev2",
00:11:07.333 "uuid": "64587202-5b29-5931-9afc-973ea59dbe6b",
00:11:07.333 "is_configured": true,
00:11:07.333 "data_offset": 2048,
00:11:07.333 "data_size": 63488
00:11:07.333 },
00:11:07.333 {
00:11:07.333 "name": "BaseBdev3",
00:11:07.333 "uuid": "162c927c-c561-5fc2-8faa-0af06a606484",
00:11:07.333 "is_configured": true,
00:11:07.333 "data_offset": 2048,
00:11:07.333 "data_size": 63488
00:11:07.333 },
00:11:07.333 {
00:11:07.333 "name": "BaseBdev4",
00:11:07.333 "uuid": "76b290ea-ebed-54dc-9b2c-1ee37b7c792f",
00:11:07.333 "is_configured": true,
00:11:07.333 "data_offset": 2048,
00:11:07.333 "data_size": 63488
00:11:07.333 }
00:11:07.333 ]
00:11:07.333 }'
00:11:07.333 13:24:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:07.333 13:24:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.901 13:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:11:07.901 13:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:11:07.901 [2024-11-20 13:24:49.375168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090
00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.837 [2024-11-20 13:24:50.288493]
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:08.837 [2024-11-20 13:24:50.288550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.837 [2024-11-20 13:24:50.288791] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000003090 00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp
00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.837 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:08.837 "name": "raid_bdev1",
00:11:08.838 "uuid": "67b0dece-0e44-4994-b406-5d9e216ce73e",
00:11:08.838 "strip_size_kb": 0,
00:11:08.838 "state": "online",
00:11:08.838 "raid_level": "raid1",
00:11:08.838 "superblock": true,
00:11:08.838 "num_base_bdevs": 4,
00:11:08.838 "num_base_bdevs_discovered": 3,
00:11:08.838 "num_base_bdevs_operational": 3,
00:11:08.838 "base_bdevs_list": [
00:11:08.838 {
00:11:08.838 "name": null,
00:11:08.838 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:08.838 "is_configured": false,
00:11:08.838 "data_offset": 0,
00:11:08.838 "data_size": 63488
00:11:08.838 },
00:11:08.838 {
00:11:08.838 "name": "BaseBdev2",
00:11:08.838 "uuid": "64587202-5b29-5931-9afc-973ea59dbe6b",
00:11:08.838 "is_configured": true,
00:11:08.838 "data_offset": 2048,
00:11:08.838 "data_size": 63488
00:11:08.838 },
00:11:08.838 {
00:11:08.838 "name": "BaseBdev3",
00:11:08.838 "uuid": "162c927c-c561-5fc2-8faa-0af06a606484",
00:11:08.838 "is_configured": true,
00:11:08.838 "data_offset": 2048,
00:11:08.838 "data_size": 63488
00:11:08.838 },
00:11:08.838 {
00:11:08.838 "name": "BaseBdev4",
00:11:08.838 "uuid": "76b290ea-ebed-54dc-9b2c-1ee37b7c792f",
00:11:08.838 "is_configured": true,
00:11:08.838 "data_offset": 2048,
00:11:08.838 "data_size": 63488
00:11:08.838 }
00:11:08.838 ]
00:11:08.838 }' 00:11:08.838 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.838 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.097 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:09.097 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.097 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.097 [2024-11-20 13:24:50.761343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:09.097 [2024-11-20 13:24:50.761382] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.097 [2024-11-20 13:24:50.763988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.097 [2024-11-20 13:24:50.764049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.097 [2024-11-20 13:24:50.764153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.097 [2024-11-20 13:24:50.764170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:11:09.356 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.356 { 00:11:09.356 "results": [ 00:11:09.356 { 00:11:09.356 "job": "raid_bdev1", 00:11:09.356 "core_mask": "0x1", 00:11:09.356 "workload": "randrw", 00:11:09.356 "percentage": 50, 00:11:09.356 "status": "finished", 00:11:09.356 "queue_depth": 1, 00:11:09.356 "io_size": 131072, 00:11:09.356 "runtime": 1.3867, 00:11:09.356 "iops": 12242.73454965025, 00:11:09.356 "mibps": 1530.3418187062812, 00:11:09.356 "io_failed": 0, 00:11:09.356 "io_timeout": 0, 00:11:09.356 "avg_latency_us": 79.0410394952534, 00:11:09.356 "min_latency_us": 22.91703056768559, 00:11:09.356 
"max_latency_us": 1717.1004366812226 00:11:09.356 } 00:11:09.356 ], 00:11:09.356 "core_count": 1 00:11:09.356 } 00:11:09.356 13:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85570 00:11:09.356 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 85570 ']' 00:11:09.356 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 85570 00:11:09.356 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:09.356 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.356 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85570 00:11:09.356 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.356 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.356 killing process with pid 85570 00:11:09.356 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85570' 00:11:09.356 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 85570 00:11:09.356 [2024-11-20 13:24:50.808793] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.356 13:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 85570 00:11:09.356 [2024-11-20 13:24:50.845048] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.615 13:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hy65wrrntI 00:11:09.615 13:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:09.615 13:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:09.615 13:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:11:09.615 13:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:09.615 13:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:09.615 13:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:09.615 13:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:09.615 00:11:09.615 real 0m3.313s 00:11:09.615 user 0m4.257s 00:11:09.615 sys 0m0.524s 00:11:09.615 13:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.615 13:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.615 ************************************ 00:11:09.615 END TEST raid_write_error_test 00:11:09.615 ************************************ 00:11:09.615 13:24:51 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:09.615 13:24:51 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:09.615 13:24:51 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:09.615 13:24:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:09.615 13:24:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.615 13:24:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:09.615 ************************************ 00:11:09.615 START TEST raid_rebuild_test 00:11:09.615 ************************************ 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:09.615 
13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85701 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85701 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85701 ']' 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.615 13:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.615 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:09.615 Zero copy mechanism will not be used. 00:11:09.615 [2024-11-20 13:24:51.223702] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:11:09.615 [2024-11-20 13:24:51.223829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85701 ] 00:11:09.874 [2024-11-20 13:24:51.377514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.874 [2024-11-20 13:24:51.405926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.874 [2024-11-20 13:24:51.449644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.874 [2024-11-20 13:24:51.449687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.441 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.441 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:10.441 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:10.441 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:10.441 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.441 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.441 BaseBdev1_malloc 00:11:10.441 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.441 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:10.441 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.441 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.441 [2024-11-20 13:24:52.104557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:10.441 
[2024-11-20 13:24:52.104667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.441 [2024-11-20 13:24:52.104701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:10.441 [2024-11-20 13:24:52.104714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.441 [2024-11-20 13:24:52.106999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.441 [2024-11-20 13:24:52.107049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:10.700 BaseBdev1 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.700 BaseBdev2_malloc 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.700 [2024-11-20 13:24:52.133481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:10.700 [2024-11-20 13:24:52.133561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.700 [2024-11-20 13:24:52.133584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:11:10.700 [2024-11-20 13:24:52.133594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.700 [2024-11-20 13:24:52.135767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.700 [2024-11-20 13:24:52.135812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:10.700 BaseBdev2 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.700 spare_malloc 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.700 spare_delay 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.700 [2024-11-20 13:24:52.174346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:10.700 [2024-11-20 13:24:52.174433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:10.700 [2024-11-20 13:24:52.174462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:10.700 [2024-11-20 13:24:52.174471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.700 [2024-11-20 13:24:52.176741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.700 [2024-11-20 13:24:52.176783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:10.700 spare 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.700 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.700 [2024-11-20 13:24:52.186385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.700 [2024-11-20 13:24:52.188333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.700 [2024-11-20 13:24:52.188456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:10.700 [2024-11-20 13:24:52.188470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:10.700 [2024-11-20 13:24:52.188840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:10.700 [2024-11-20 13:24:52.189010] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:10.700 [2024-11-20 13:24:52.189026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:10.701 [2024-11-20 13:24:52.189327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.701 "name": "raid_bdev1", 00:11:10.701 "uuid": "2821da60-d9d5-4f8b-81b4-95842a3e5cbc", 00:11:10.701 "strip_size_kb": 0, 00:11:10.701 "state": "online", 00:11:10.701 
"raid_level": "raid1", 00:11:10.701 "superblock": false, 00:11:10.701 "num_base_bdevs": 2, 00:11:10.701 "num_base_bdevs_discovered": 2, 00:11:10.701 "num_base_bdevs_operational": 2, 00:11:10.701 "base_bdevs_list": [ 00:11:10.701 { 00:11:10.701 "name": "BaseBdev1", 00:11:10.701 "uuid": "6085604c-99fd-50da-ba24-e23dd4890ec9", 00:11:10.701 "is_configured": true, 00:11:10.701 "data_offset": 0, 00:11:10.701 "data_size": 65536 00:11:10.701 }, 00:11:10.701 { 00:11:10.701 "name": "BaseBdev2", 00:11:10.701 "uuid": "c28c48d4-8e24-5077-a474-8d5de144b4af", 00:11:10.701 "is_configured": true, 00:11:10.701 "data_offset": 0, 00:11:10.701 "data_size": 65536 00:11:10.701 } 00:11:10.701 ] 00:11:10.701 }' 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.701 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.268 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:11.268 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.268 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.268 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:11.268 [2024-11-20 13:24:52.649842] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.268 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.268 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:11.268 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:11.268 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.268 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.268 13:24:52 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.268 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.268 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:11.268 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:11.268 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:11.268 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:11.268 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:11.268 13:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:11.269 13:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:11.269 13:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:11.269 13:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:11.269 13:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:11.269 13:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:11.269 13:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:11.269 13:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:11.269 13:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:11.269 [2024-11-20 13:24:52.921178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:11.528 /dev/nbd0 00:11:11.528 13:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:11.528 13:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:11:11.528 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:11.528 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:11.528 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:11.528 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:11.528 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:11.528 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:11.528 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:11.528 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:11.528 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:11.528 1+0 records in 00:11:11.528 1+0 records out 00:11:11.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471881 s, 8.7 MB/s 00:11:11.528 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:11.528 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:11.528 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:11.528 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:11.528 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:11.529 13:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:11.529 13:24:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:11.529 13:24:52 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:11.529 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:11.529 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:15.720 65536+0 records in 00:11:15.720 65536+0 records out 00:11:15.720 33554432 bytes (34 MB, 32 MiB) copied, 4.15733 s, 8.1 MB/s 00:11:15.720 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:15.720 13:24:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:15.720 13:24:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:15.720 13:24:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:15.720 13:24:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:15.720 13:24:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.720 13:24:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:15.720 13:24:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:15.979 [2024-11-20 13:24:57.388232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.979 13:24:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:15.979 13:24:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:15.979 13:24:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.979 13:24:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.980 [2024-11-20 13:24:57.408325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.980 13:24:57 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.980 "name": "raid_bdev1", 00:11:15.980 "uuid": "2821da60-d9d5-4f8b-81b4-95842a3e5cbc", 00:11:15.980 "strip_size_kb": 0, 00:11:15.980 "state": "online", 00:11:15.980 "raid_level": "raid1", 00:11:15.980 "superblock": false, 00:11:15.980 "num_base_bdevs": 2, 00:11:15.980 "num_base_bdevs_discovered": 1, 00:11:15.980 "num_base_bdevs_operational": 1, 00:11:15.980 "base_bdevs_list": [ 00:11:15.980 { 00:11:15.980 "name": null, 00:11:15.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.980 "is_configured": false, 00:11:15.980 "data_offset": 0, 00:11:15.980 "data_size": 65536 00:11:15.980 }, 00:11:15.980 { 00:11:15.980 "name": "BaseBdev2", 00:11:15.980 "uuid": "c28c48d4-8e24-5077-a474-8d5de144b4af", 00:11:15.980 "is_configured": true, 00:11:15.980 "data_offset": 0, 00:11:15.980 "data_size": 65536 00:11:15.980 } 00:11:15.980 ] 00:11:15.980 }' 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.980 13:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.240 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:16.240 13:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.240 13:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.240 [2024-11-20 13:24:57.839639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:16.240 [2024-11-20 13:24:57.858963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 
00:11:16.240 13:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.240 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:16.240 [2024-11-20 13:24:57.861751] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:17.618 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:17.618 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:17.618 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:17.618 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:17.618 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:17.618 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.618 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.618 13:24:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.618 13:24:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.618 13:24:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.618 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:17.618 "name": "raid_bdev1", 00:11:17.618 "uuid": "2821da60-d9d5-4f8b-81b4-95842a3e5cbc", 00:11:17.618 "strip_size_kb": 0, 00:11:17.618 "state": "online", 00:11:17.618 "raid_level": "raid1", 00:11:17.618 "superblock": false, 00:11:17.618 "num_base_bdevs": 2, 00:11:17.618 "num_base_bdevs_discovered": 2, 00:11:17.619 "num_base_bdevs_operational": 2, 00:11:17.619 "process": { 00:11:17.619 "type": "rebuild", 00:11:17.619 "target": "spare", 00:11:17.619 "progress": { 00:11:17.619 
"blocks": 20480, 00:11:17.619 "percent": 31 00:11:17.619 } 00:11:17.619 }, 00:11:17.619 "base_bdevs_list": [ 00:11:17.619 { 00:11:17.619 "name": "spare", 00:11:17.619 "uuid": "a3aba35a-ad1f-5c61-b4b5-692ede5e1279", 00:11:17.619 "is_configured": true, 00:11:17.619 "data_offset": 0, 00:11:17.619 "data_size": 65536 00:11:17.619 }, 00:11:17.619 { 00:11:17.619 "name": "BaseBdev2", 00:11:17.619 "uuid": "c28c48d4-8e24-5077-a474-8d5de144b4af", 00:11:17.619 "is_configured": true, 00:11:17.619 "data_offset": 0, 00:11:17.619 "data_size": 65536 00:11:17.619 } 00:11:17.619 ] 00:11:17.619 }' 00:11:17.619 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:17.619 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:17.619 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.619 [2024-11-20 13:24:59.021485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:17.619 [2024-11-20 13:24:59.067890] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:17.619 [2024-11-20 13:24:59.067980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.619 [2024-11-20 13:24:59.068015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:17.619 [2024-11-20 13:24:59.068024] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:17.619 13:24:59 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.619 "name": "raid_bdev1", 00:11:17.619 "uuid": "2821da60-d9d5-4f8b-81b4-95842a3e5cbc", 00:11:17.619 "strip_size_kb": 0, 00:11:17.619 "state": "online", 00:11:17.619 "raid_level": "raid1", 00:11:17.619 
"superblock": false, 00:11:17.619 "num_base_bdevs": 2, 00:11:17.619 "num_base_bdevs_discovered": 1, 00:11:17.619 "num_base_bdevs_operational": 1, 00:11:17.619 "base_bdevs_list": [ 00:11:17.619 { 00:11:17.619 "name": null, 00:11:17.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.619 "is_configured": false, 00:11:17.619 "data_offset": 0, 00:11:17.619 "data_size": 65536 00:11:17.619 }, 00:11:17.619 { 00:11:17.619 "name": "BaseBdev2", 00:11:17.619 "uuid": "c28c48d4-8e24-5077-a474-8d5de144b4af", 00:11:17.619 "is_configured": true, 00:11:17.619 "data_offset": 0, 00:11:17.619 "data_size": 65536 00:11:17.619 } 00:11:17.619 ] 00:11:17.619 }' 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.619 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.879 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:17.879 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:17.879 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:17.879 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:17.879 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:18.139 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.139 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.139 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.139 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.139 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.139 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:11:18.139 "name": "raid_bdev1", 00:11:18.139 "uuid": "2821da60-d9d5-4f8b-81b4-95842a3e5cbc", 00:11:18.139 "strip_size_kb": 0, 00:11:18.139 "state": "online", 00:11:18.139 "raid_level": "raid1", 00:11:18.139 "superblock": false, 00:11:18.139 "num_base_bdevs": 2, 00:11:18.139 "num_base_bdevs_discovered": 1, 00:11:18.139 "num_base_bdevs_operational": 1, 00:11:18.139 "base_bdevs_list": [ 00:11:18.139 { 00:11:18.139 "name": null, 00:11:18.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.139 "is_configured": false, 00:11:18.139 "data_offset": 0, 00:11:18.139 "data_size": 65536 00:11:18.139 }, 00:11:18.139 { 00:11:18.139 "name": "BaseBdev2", 00:11:18.139 "uuid": "c28c48d4-8e24-5077-a474-8d5de144b4af", 00:11:18.139 "is_configured": true, 00:11:18.139 "data_offset": 0, 00:11:18.139 "data_size": 65536 00:11:18.139 } 00:11:18.139 ] 00:11:18.139 }' 00:11:18.139 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:18.139 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:18.139 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:18.139 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:18.139 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:18.139 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.139 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.139 [2024-11-20 13:24:59.688274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:18.139 [2024-11-20 13:24:59.693666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d062f0 00:11:18.139 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.139 
13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:18.139 [2024-11-20 13:24:59.696046] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:19.079 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:19.079 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.079 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:19.079 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:19.079 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.079 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.079 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.079 13:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.079 13:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.079 13:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.341 "name": "raid_bdev1", 00:11:19.341 "uuid": "2821da60-d9d5-4f8b-81b4-95842a3e5cbc", 00:11:19.341 "strip_size_kb": 0, 00:11:19.341 "state": "online", 00:11:19.341 "raid_level": "raid1", 00:11:19.341 "superblock": false, 00:11:19.341 "num_base_bdevs": 2, 00:11:19.341 "num_base_bdevs_discovered": 2, 00:11:19.341 "num_base_bdevs_operational": 2, 00:11:19.341 "process": { 00:11:19.341 "type": "rebuild", 00:11:19.341 "target": "spare", 00:11:19.341 "progress": { 00:11:19.341 "blocks": 20480, 00:11:19.341 "percent": 31 00:11:19.341 } 00:11:19.341 }, 00:11:19.341 "base_bdevs_list": [ 
00:11:19.341 { 00:11:19.341 "name": "spare", 00:11:19.341 "uuid": "a3aba35a-ad1f-5c61-b4b5-692ede5e1279", 00:11:19.341 "is_configured": true, 00:11:19.341 "data_offset": 0, 00:11:19.341 "data_size": 65536 00:11:19.341 }, 00:11:19.341 { 00:11:19.341 "name": "BaseBdev2", 00:11:19.341 "uuid": "c28c48d4-8e24-5077-a474-8d5de144b4af", 00:11:19.341 "is_configured": true, 00:11:19.341 "data_offset": 0, 00:11:19.341 "data_size": 65536 00:11:19.341 } 00:11:19.341 ] 00:11:19.341 }' 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=289 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:19.341 
13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.341 "name": "raid_bdev1", 00:11:19.341 "uuid": "2821da60-d9d5-4f8b-81b4-95842a3e5cbc", 00:11:19.341 "strip_size_kb": 0, 00:11:19.341 "state": "online", 00:11:19.341 "raid_level": "raid1", 00:11:19.341 "superblock": false, 00:11:19.341 "num_base_bdevs": 2, 00:11:19.341 "num_base_bdevs_discovered": 2, 00:11:19.341 "num_base_bdevs_operational": 2, 00:11:19.341 "process": { 00:11:19.341 "type": "rebuild", 00:11:19.341 "target": "spare", 00:11:19.341 "progress": { 00:11:19.341 "blocks": 22528, 00:11:19.341 "percent": 34 00:11:19.341 } 00:11:19.341 }, 00:11:19.341 "base_bdevs_list": [ 00:11:19.341 { 00:11:19.341 "name": "spare", 00:11:19.341 "uuid": "a3aba35a-ad1f-5c61-b4b5-692ede5e1279", 00:11:19.341 "is_configured": true, 00:11:19.341 "data_offset": 0, 00:11:19.341 "data_size": 65536 00:11:19.341 }, 00:11:19.341 { 00:11:19.341 "name": "BaseBdev2", 00:11:19.341 "uuid": "c28c48d4-8e24-5077-a474-8d5de144b4af", 00:11:19.341 "is_configured": true, 00:11:19.341 "data_offset": 0, 00:11:19.341 "data_size": 65536 00:11:19.341 } 00:11:19.341 ] 00:11:19.341 }' 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:11:19.341 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.601 13:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:19.601 13:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:20.539 "name": "raid_bdev1", 00:11:20.539 "uuid": "2821da60-d9d5-4f8b-81b4-95842a3e5cbc", 00:11:20.539 "strip_size_kb": 0, 00:11:20.539 "state": "online", 00:11:20.539 "raid_level": "raid1", 00:11:20.539 "superblock": false, 00:11:20.539 "num_base_bdevs": 2, 00:11:20.539 "num_base_bdevs_discovered": 2, 00:11:20.539 "num_base_bdevs_operational": 2, 00:11:20.539 "process": { 
00:11:20.539 "type": "rebuild", 00:11:20.539 "target": "spare", 00:11:20.539 "progress": { 00:11:20.539 "blocks": 47104, 00:11:20.539 "percent": 71 00:11:20.539 } 00:11:20.539 }, 00:11:20.539 "base_bdevs_list": [ 00:11:20.539 { 00:11:20.539 "name": "spare", 00:11:20.539 "uuid": "a3aba35a-ad1f-5c61-b4b5-692ede5e1279", 00:11:20.539 "is_configured": true, 00:11:20.539 "data_offset": 0, 00:11:20.539 "data_size": 65536 00:11:20.539 }, 00:11:20.539 { 00:11:20.539 "name": "BaseBdev2", 00:11:20.539 "uuid": "c28c48d4-8e24-5077-a474-8d5de144b4af", 00:11:20.539 "is_configured": true, 00:11:20.539 "data_offset": 0, 00:11:20.539 "data_size": 65536 00:11:20.539 } 00:11:20.539 ] 00:11:20.539 }' 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:20.539 13:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:21.477 [2024-11-20 13:25:02.910732] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:21.477 [2024-11-20 13:25:02.910852] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:21.477 [2024-11-20 13:25:02.910915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:21.736 "name": "raid_bdev1", 00:11:21.736 "uuid": "2821da60-d9d5-4f8b-81b4-95842a3e5cbc", 00:11:21.736 "strip_size_kb": 0, 00:11:21.736 "state": "online", 00:11:21.736 "raid_level": "raid1", 00:11:21.736 "superblock": false, 00:11:21.736 "num_base_bdevs": 2, 00:11:21.736 "num_base_bdevs_discovered": 2, 00:11:21.736 "num_base_bdevs_operational": 2, 00:11:21.736 "base_bdevs_list": [ 00:11:21.736 { 00:11:21.736 "name": "spare", 00:11:21.736 "uuid": "a3aba35a-ad1f-5c61-b4b5-692ede5e1279", 00:11:21.736 "is_configured": true, 00:11:21.736 "data_offset": 0, 00:11:21.736 "data_size": 65536 00:11:21.736 }, 00:11:21.736 { 00:11:21.736 "name": "BaseBdev2", 00:11:21.736 "uuid": "c28c48d4-8e24-5077-a474-8d5de144b4af", 00:11:21.736 "is_configured": true, 00:11:21.736 "data_offset": 0, 00:11:21.736 "data_size": 65536 00:11:21.736 } 00:11:21.736 ] 00:11:21.736 }' 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:21.736 13:25:03 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:21.736 "name": "raid_bdev1", 00:11:21.736 "uuid": "2821da60-d9d5-4f8b-81b4-95842a3e5cbc", 00:11:21.736 "strip_size_kb": 0, 00:11:21.736 "state": "online", 00:11:21.736 "raid_level": "raid1", 00:11:21.736 "superblock": false, 00:11:21.736 "num_base_bdevs": 2, 00:11:21.736 "num_base_bdevs_discovered": 2, 00:11:21.736 "num_base_bdevs_operational": 2, 00:11:21.736 "base_bdevs_list": [ 00:11:21.736 { 00:11:21.736 "name": "spare", 00:11:21.736 "uuid": "a3aba35a-ad1f-5c61-b4b5-692ede5e1279", 00:11:21.736 "is_configured": true, 
00:11:21.736 "data_offset": 0, 00:11:21.736 "data_size": 65536 00:11:21.736 }, 00:11:21.736 { 00:11:21.736 "name": "BaseBdev2", 00:11:21.736 "uuid": "c28c48d4-8e24-5077-a474-8d5de144b4af", 00:11:21.736 "is_configured": true, 00:11:21.736 "data_offset": 0, 00:11:21.736 "data_size": 65536 00:11:21.736 } 00:11:21.736 ] 00:11:21.736 }' 00:11:21.736 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.995 13:25:03 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.995 "name": "raid_bdev1", 00:11:21.995 "uuid": "2821da60-d9d5-4f8b-81b4-95842a3e5cbc", 00:11:21.995 "strip_size_kb": 0, 00:11:21.995 "state": "online", 00:11:21.995 "raid_level": "raid1", 00:11:21.995 "superblock": false, 00:11:21.995 "num_base_bdevs": 2, 00:11:21.995 "num_base_bdevs_discovered": 2, 00:11:21.995 "num_base_bdevs_operational": 2, 00:11:21.995 "base_bdevs_list": [ 00:11:21.995 { 00:11:21.995 "name": "spare", 00:11:21.995 "uuid": "a3aba35a-ad1f-5c61-b4b5-692ede5e1279", 00:11:21.995 "is_configured": true, 00:11:21.995 "data_offset": 0, 00:11:21.995 "data_size": 65536 00:11:21.995 }, 00:11:21.995 { 00:11:21.995 "name": "BaseBdev2", 00:11:21.995 "uuid": "c28c48d4-8e24-5077-a474-8d5de144b4af", 00:11:21.995 "is_configured": true, 00:11:21.995 "data_offset": 0, 00:11:21.995 "data_size": 65536 00:11:21.995 } 00:11:21.995 ] 00:11:21.995 }' 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.995 13:25:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.562 [2024-11-20 13:25:04.014243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:22.562 [2024-11-20 
13:25:04.014283] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.562 [2024-11-20 13:25:04.014390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.562 [2024-11-20 13:25:04.014477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.562 [2024-11-20 13:25:04.014496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:22.562 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:22.820 /dev/nbd0 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:22.820 1+0 records in 00:11:22.820 1+0 records out 00:11:22.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231916 s, 17.7 MB/s 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:22.820 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:23.079 /dev/nbd1 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:23.079 1+0 records in 00:11:23.079 1+0 records out 00:11:23.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277899 s, 14.7 MB/s 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.079 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:23.344 13:25:04 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:23.344 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:23.344 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:23.344 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.344 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.344 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:23.344 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:23.344 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.344 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.344 13:25:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:23.611 13:25:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:23.611 13:25:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
85701 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85701 ']' 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85701 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85701 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85701' 00:11:23.612 killing process with pid 85701 00:11:23.612 Received shutdown signal, test time was about 60.000000 seconds 00:11:23.612 00:11:23.612 Latency(us) 00:11:23.612 [2024-11-20T13:25:05.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:23.612 [2024-11-20T13:25:05.280Z] =================================================================================================================== 00:11:23.612 [2024-11-20T13:25:05.280Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 85701 00:11:23.612 [2024-11-20 13:25:05.209430] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.612 13:25:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 85701 00:11:23.612 [2024-11-20 13:25:05.242441] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:23.870 00:11:23.870 real 0m14.328s 00:11:23.870 user 0m16.733s 00:11:23.870 sys 
0m2.977s 00:11:23.870 ************************************ 00:11:23.870 END TEST raid_rebuild_test 00:11:23.870 ************************************ 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.870 13:25:05 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:23.870 13:25:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:23.870 13:25:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.870 13:25:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.870 ************************************ 00:11:23.870 START TEST raid_rebuild_test_sb 00:11:23.870 ************************************ 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:23.870 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:23.871 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:23.871 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:23.871 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:23.871 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:23.871 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86108 00:11:23.871 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86108 00:11:23.871 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 86108 ']' 00:11:23.871 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.871 13:25:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.129 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:24.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.129 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.129 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.129 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.129 [2024-11-20 13:25:05.641440] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:11:24.129 [2024-11-20 13:25:05.641758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86108 ] 00:11:24.129 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:24.129 Zero copy mechanism will not be used. 
00:11:24.389 [2024-11-20 13:25:05.806785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.389 [2024-11-20 13:25:05.837940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.389 [2024-11-20 13:25:05.884870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.389 [2024-11-20 13:25:05.885032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.956 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.956 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:24.956 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:24.956 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:24.956 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.956 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.956 BaseBdev1_malloc 00:11:24.956 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.956 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:24.956 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.956 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.956 [2024-11-20 13:25:06.574217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:24.956 [2024-11-20 13:25:06.574300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.956 [2024-11-20 13:25:06.574341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:24.956 [2024-11-20 
13:25:06.574357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.956 [2024-11-20 13:25:06.576946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.956 [2024-11-20 13:25:06.577011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:24.956 BaseBdev1 00:11:24.956 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.956 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:24.956 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:24.956 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.956 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.956 BaseBdev2_malloc 00:11:24.956 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.957 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:24.957 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.957 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.957 [2024-11-20 13:25:06.603732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:24.957 [2024-11-20 13:25:06.603880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.957 [2024-11-20 13:25:06.603914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:24.957 [2024-11-20 13:25:06.603926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.957 [2024-11-20 13:25:06.606469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:11:24.957 [2024-11-20 13:25:06.606525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:24.957 BaseBdev2 00:11:24.957 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.957 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:24.957 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.957 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.215 spare_malloc 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.215 spare_delay 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.215 [2024-11-20 13:25:06.645251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:25.215 [2024-11-20 13:25:06.645323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.215 [2024-11-20 13:25:06.645353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:25.215 [2024-11-20 13:25:06.645365] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.215 [2024-11-20 13:25:06.647925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.215 [2024-11-20 13:25:06.647971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:25.215 spare 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.215 [2024-11-20 13:25:06.657292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.215 [2024-11-20 13:25:06.659546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.215 [2024-11-20 13:25:06.659754] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:25.215 [2024-11-20 13:25:06.659771] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:25.215 [2024-11-20 13:25:06.660138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:25.215 [2024-11-20 13:25:06.660319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:25.215 [2024-11-20 13:25:06.660342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:25.215 [2024-11-20 13:25:06.660503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.215 "name": "raid_bdev1", 00:11:25.215 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:25.215 "strip_size_kb": 0, 00:11:25.215 "state": "online", 00:11:25.215 "raid_level": "raid1", 00:11:25.215 "superblock": true, 00:11:25.215 "num_base_bdevs": 2, 00:11:25.215 
"num_base_bdevs_discovered": 2, 00:11:25.215 "num_base_bdevs_operational": 2, 00:11:25.215 "base_bdevs_list": [ 00:11:25.215 { 00:11:25.215 "name": "BaseBdev1", 00:11:25.215 "uuid": "36b91413-5674-5991-9ea1-476af1a772b6", 00:11:25.215 "is_configured": true, 00:11:25.215 "data_offset": 2048, 00:11:25.215 "data_size": 63488 00:11:25.215 }, 00:11:25.215 { 00:11:25.215 "name": "BaseBdev2", 00:11:25.215 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:25.215 "is_configured": true, 00:11:25.215 "data_offset": 2048, 00:11:25.215 "data_size": 63488 00:11:25.215 } 00:11:25.215 ] 00:11:25.215 }' 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.215 13:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.781 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:25.782 [2024-11-20 13:25:07.152765] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:25.782 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:26.041 [2024-11-20 13:25:07.511969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:26.041 /dev/nbd0 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.041 1+0 records in 00:11:26.041 1+0 records out 00:11:26.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557107 s, 7.4 MB/s 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:26.041 13:25:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:26.041 13:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:31.314 63488+0 records in 00:11:31.314 63488+0 records out 00:11:31.314 32505856 bytes (33 MB, 31 MiB) copied, 4.6261 s, 7.0 MB/s 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:31.314 [2024-11-20 13:25:12.426700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.314 [2024-11-20 13:25:12.462750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.314 13:25:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.314 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.314 "name": "raid_bdev1", 00:11:31.314 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:31.314 "strip_size_kb": 0, 00:11:31.314 "state": "online", 00:11:31.314 "raid_level": "raid1", 00:11:31.314 "superblock": true, 00:11:31.314 "num_base_bdevs": 2, 00:11:31.314 "num_base_bdevs_discovered": 1, 00:11:31.314 "num_base_bdevs_operational": 1, 00:11:31.314 "base_bdevs_list": [ 00:11:31.314 { 00:11:31.314 "name": null, 00:11:31.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.314 "is_configured": false, 00:11:31.314 "data_offset": 0, 00:11:31.314 "data_size": 63488 00:11:31.314 }, 00:11:31.314 { 00:11:31.314 "name": "BaseBdev2", 00:11:31.314 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:31.314 "is_configured": true, 00:11:31.314 "data_offset": 2048, 00:11:31.314 "data_size": 63488 00:11:31.314 } 00:11:31.314 ] 00:11:31.314 }' 00:11:31.315 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.315 13:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.315 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:31.315 13:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.315 13:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.315 [2024-11-20 13:25:12.945971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:11:31.315 [2024-11-20 13:25:12.964789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:11:31.315 13:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.315 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:31.315 [2024-11-20 13:25:12.967940] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:32.691 13:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:32.691 13:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:32.691 13:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:32.691 13:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:32.691 13:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:32.691 13:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.691 13:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.691 13:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.691 13:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.691 13:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:32.691 "name": "raid_bdev1", 00:11:32.691 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:32.691 "strip_size_kb": 0, 00:11:32.691 "state": "online", 00:11:32.691 "raid_level": "raid1", 00:11:32.691 "superblock": true, 00:11:32.691 "num_base_bdevs": 2, 00:11:32.691 
"num_base_bdevs_discovered": 2, 00:11:32.691 "num_base_bdevs_operational": 2, 00:11:32.691 "process": { 00:11:32.691 "type": "rebuild", 00:11:32.691 "target": "spare", 00:11:32.691 "progress": { 00:11:32.691 "blocks": 20480, 00:11:32.691 "percent": 32 00:11:32.691 } 00:11:32.691 }, 00:11:32.691 "base_bdevs_list": [ 00:11:32.691 { 00:11:32.691 "name": "spare", 00:11:32.691 "uuid": "8bc78c0f-22c4-5326-b3e8-9078ba222ad0", 00:11:32.691 "is_configured": true, 00:11:32.691 "data_offset": 2048, 00:11:32.691 "data_size": 63488 00:11:32.691 }, 00:11:32.691 { 00:11:32.691 "name": "BaseBdev2", 00:11:32.691 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:32.691 "is_configured": true, 00:11:32.691 "data_offset": 2048, 00:11:32.691 "data_size": 63488 00:11:32.691 } 00:11:32.691 ] 00:11:32.691 }' 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.691 [2024-11-20 13:25:14.111427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:32.691 [2024-11-20 13:25:14.174452] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:32.691 [2024-11-20 13:25:14.174537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.691 [2024-11-20 13:25:14.174557] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:32.691 [2024-11-20 13:25:14.174566] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.691 13:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.691 13:25:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.691 "name": "raid_bdev1", 00:11:32.691 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:32.691 "strip_size_kb": 0, 00:11:32.691 "state": "online", 00:11:32.691 "raid_level": "raid1", 00:11:32.691 "superblock": true, 00:11:32.692 "num_base_bdevs": 2, 00:11:32.692 "num_base_bdevs_discovered": 1, 00:11:32.692 "num_base_bdevs_operational": 1, 00:11:32.692 "base_bdevs_list": [ 00:11:32.692 { 00:11:32.692 "name": null, 00:11:32.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.692 "is_configured": false, 00:11:32.692 "data_offset": 0, 00:11:32.692 "data_size": 63488 00:11:32.692 }, 00:11:32.692 { 00:11:32.692 "name": "BaseBdev2", 00:11:32.692 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:32.692 "is_configured": true, 00:11:32.692 "data_offset": 2048, 00:11:32.692 "data_size": 63488 00:11:32.692 } 00:11:32.692 ] 00:11:32.692 }' 00:11:32.692 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.692 13:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:33.261 "name": "raid_bdev1", 00:11:33.261 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:33.261 "strip_size_kb": 0, 00:11:33.261 "state": "online", 00:11:33.261 "raid_level": "raid1", 00:11:33.261 "superblock": true, 00:11:33.261 "num_base_bdevs": 2, 00:11:33.261 "num_base_bdevs_discovered": 1, 00:11:33.261 "num_base_bdevs_operational": 1, 00:11:33.261 "base_bdevs_list": [ 00:11:33.261 { 00:11:33.261 "name": null, 00:11:33.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.261 "is_configured": false, 00:11:33.261 "data_offset": 0, 00:11:33.261 "data_size": 63488 00:11:33.261 }, 00:11:33.261 { 00:11:33.261 "name": "BaseBdev2", 00:11:33.261 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:33.261 "is_configured": true, 00:11:33.261 "data_offset": 2048, 00:11:33.261 "data_size": 63488 00:11:33.261 } 00:11:33.261 ] 00:11:33.261 }' 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:33.261 [2024-11-20 13:25:14.791310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:33.261 [2024-11-20 13:25:14.796507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e350 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.261 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:33.261 [2024-11-20 13:25:14.798649] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:34.199 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:34.199 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:34.199 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:34.199 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:34.199 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:34.199 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.199 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.199 13:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.199 13:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.199 13:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.199 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:34.199 "name": "raid_bdev1", 00:11:34.199 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:34.199 "strip_size_kb": 0, 00:11:34.199 "state": "online", 00:11:34.199 "raid_level": "raid1", 
00:11:34.199 "superblock": true, 00:11:34.199 "num_base_bdevs": 2, 00:11:34.199 "num_base_bdevs_discovered": 2, 00:11:34.199 "num_base_bdevs_operational": 2, 00:11:34.199 "process": { 00:11:34.199 "type": "rebuild", 00:11:34.199 "target": "spare", 00:11:34.199 "progress": { 00:11:34.199 "blocks": 20480, 00:11:34.199 "percent": 32 00:11:34.199 } 00:11:34.199 }, 00:11:34.199 "base_bdevs_list": [ 00:11:34.199 { 00:11:34.199 "name": "spare", 00:11:34.199 "uuid": "8bc78c0f-22c4-5326-b3e8-9078ba222ad0", 00:11:34.199 "is_configured": true, 00:11:34.199 "data_offset": 2048, 00:11:34.199 "data_size": 63488 00:11:34.199 }, 00:11:34.199 { 00:11:34.199 "name": "BaseBdev2", 00:11:34.199 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:34.199 "is_configured": true, 00:11:34.199 "data_offset": 2048, 00:11:34.199 "data_size": 63488 00:11:34.199 } 00:11:34.199 ] 00:11:34.199 }' 00:11:34.199 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:34.458 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:34.458 13:25:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=304 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:34.458 "name": "raid_bdev1", 00:11:34.458 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:34.458 "strip_size_kb": 0, 00:11:34.458 "state": "online", 00:11:34.458 "raid_level": "raid1", 00:11:34.458 "superblock": true, 00:11:34.458 "num_base_bdevs": 2, 00:11:34.458 "num_base_bdevs_discovered": 2, 00:11:34.458 "num_base_bdevs_operational": 2, 00:11:34.458 "process": { 00:11:34.458 "type": "rebuild", 00:11:34.458 "target": "spare", 00:11:34.458 "progress": { 00:11:34.458 "blocks": 22528, 00:11:34.458 "percent": 35 00:11:34.458 } 00:11:34.458 }, 00:11:34.458 "base_bdevs_list": [ 
00:11:34.458 { 00:11:34.458 "name": "spare", 00:11:34.458 "uuid": "8bc78c0f-22c4-5326-b3e8-9078ba222ad0", 00:11:34.458 "is_configured": true, 00:11:34.458 "data_offset": 2048, 00:11:34.458 "data_size": 63488 00:11:34.458 }, 00:11:34.458 { 00:11:34.458 "name": "BaseBdev2", 00:11:34.458 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:34.458 "is_configured": true, 00:11:34.458 "data_offset": 2048, 00:11:34.458 "data_size": 63488 00:11:34.458 } 00:11:34.458 ] 00:11:34.458 }' 00:11:34.458 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:34.458 13:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:34.458 13:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:34.458 13:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:34.458 13:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:35.835 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:35.835 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:35.835 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:35.836 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:35.836 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:35.836 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:35.836 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.836 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.836 13:25:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.836 13:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.836 13:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.836 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:35.836 "name": "raid_bdev1", 00:11:35.836 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:35.836 "strip_size_kb": 0, 00:11:35.836 "state": "online", 00:11:35.836 "raid_level": "raid1", 00:11:35.836 "superblock": true, 00:11:35.836 "num_base_bdevs": 2, 00:11:35.836 "num_base_bdevs_discovered": 2, 00:11:35.836 "num_base_bdevs_operational": 2, 00:11:35.836 "process": { 00:11:35.836 "type": "rebuild", 00:11:35.836 "target": "spare", 00:11:35.836 "progress": { 00:11:35.836 "blocks": 45056, 00:11:35.836 "percent": 70 00:11:35.836 } 00:11:35.836 }, 00:11:35.836 "base_bdevs_list": [ 00:11:35.836 { 00:11:35.836 "name": "spare", 00:11:35.836 "uuid": "8bc78c0f-22c4-5326-b3e8-9078ba222ad0", 00:11:35.836 "is_configured": true, 00:11:35.836 "data_offset": 2048, 00:11:35.836 "data_size": 63488 00:11:35.836 }, 00:11:35.836 { 00:11:35.836 "name": "BaseBdev2", 00:11:35.836 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:35.836 "is_configured": true, 00:11:35.836 "data_offset": 2048, 00:11:35.836 "data_size": 63488 00:11:35.836 } 00:11:35.836 ] 00:11:35.836 }' 00:11:35.836 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:35.836 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:35.836 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:35.836 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:35.836 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:36.402 [2024-11-20 
13:25:17.912552] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:36.402 [2024-11-20 13:25:17.912736] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:36.402 [2024-11-20 13:25:17.912917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.662 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:36.662 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:36.662 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:36.662 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:36.662 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:36.662 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:36.662 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.662 13:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.662 13:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.662 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.662 13:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.662 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:36.662 "name": "raid_bdev1", 00:11:36.662 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:36.662 "strip_size_kb": 0, 00:11:36.662 "state": "online", 00:11:36.662 "raid_level": "raid1", 00:11:36.662 "superblock": true, 00:11:36.662 "num_base_bdevs": 2, 00:11:36.662 "num_base_bdevs_discovered": 2, 00:11:36.662 
"num_base_bdevs_operational": 2, 00:11:36.662 "base_bdevs_list": [ 00:11:36.662 { 00:11:36.662 "name": "spare", 00:11:36.662 "uuid": "8bc78c0f-22c4-5326-b3e8-9078ba222ad0", 00:11:36.662 "is_configured": true, 00:11:36.662 "data_offset": 2048, 00:11:36.662 "data_size": 63488 00:11:36.662 }, 00:11:36.662 { 00:11:36.662 "name": "BaseBdev2", 00:11:36.662 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:36.662 "is_configured": true, 00:11:36.662 "data_offset": 2048, 00:11:36.662 "data_size": 63488 00:11:36.662 } 00:11:36.662 ] 00:11:36.662 }' 00:11:36.662 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:36.662 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:36.662 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.922 13:25:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:36.922 "name": "raid_bdev1", 00:11:36.922 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:36.922 "strip_size_kb": 0, 00:11:36.922 "state": "online", 00:11:36.922 "raid_level": "raid1", 00:11:36.922 "superblock": true, 00:11:36.922 "num_base_bdevs": 2, 00:11:36.922 "num_base_bdevs_discovered": 2, 00:11:36.922 "num_base_bdevs_operational": 2, 00:11:36.922 "base_bdevs_list": [ 00:11:36.922 { 00:11:36.922 "name": "spare", 00:11:36.922 "uuid": "8bc78c0f-22c4-5326-b3e8-9078ba222ad0", 00:11:36.922 "is_configured": true, 00:11:36.922 "data_offset": 2048, 00:11:36.922 "data_size": 63488 00:11:36.922 }, 00:11:36.922 { 00:11:36.922 "name": "BaseBdev2", 00:11:36.922 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:36.922 "is_configured": true, 00:11:36.922 "data_offset": 2048, 00:11:36.922 "data_size": 63488 00:11:36.922 } 00:11:36.922 ] 00:11:36.922 }' 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.922 
13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.922 "name": "raid_bdev1", 00:11:36.922 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:36.922 "strip_size_kb": 0, 00:11:36.922 "state": "online", 00:11:36.922 "raid_level": "raid1", 00:11:36.922 "superblock": true, 00:11:36.922 "num_base_bdevs": 2, 00:11:36.922 "num_base_bdevs_discovered": 2, 00:11:36.922 "num_base_bdevs_operational": 2, 00:11:36.922 "base_bdevs_list": [ 00:11:36.922 { 00:11:36.922 "name": "spare", 00:11:36.922 "uuid": "8bc78c0f-22c4-5326-b3e8-9078ba222ad0", 00:11:36.922 "is_configured": true, 00:11:36.922 "data_offset": 2048, 00:11:36.922 "data_size": 63488 00:11:36.922 }, 
00:11:36.922 { 00:11:36.922 "name": "BaseBdev2", 00:11:36.922 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:36.922 "is_configured": true, 00:11:36.922 "data_offset": 2048, 00:11:36.922 "data_size": 63488 00:11:36.922 } 00:11:36.922 ] 00:11:36.922 }' 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.922 13:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.492 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:37.492 13:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.492 13:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.492 [2024-11-20 13:25:18.964176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:37.492 [2024-11-20 13:25:18.964334] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.492 [2024-11-20 13:25:18.964500] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.492 [2024-11-20 13:25:18.964646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.492 [2024-11-20 13:25:18.964712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:37.492 13:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.492 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.492 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:37.492 13:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.492 13:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.492 13:25:18 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.492 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:37.492 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:37.492 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:37.492 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:37.492 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:37.492 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:37.492 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:37.492 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:37.492 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:37.492 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:37.492 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:37.492 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:37.492 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:37.760 /dev/nbd0 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:37.760 1+0 records in 00:11:37.760 1+0 records out 00:11:37.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027932 s, 14.7 MB/s 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:37.760 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:38.019 /dev/nbd1 00:11:38.019 13:25:19 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:38.019 1+0 records in 00:11:38.019 1+0 records out 00:11:38.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303544 s, 13.5 MB/s 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:38.019 13:25:19 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.019 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:38.279 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:38.279 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:38.279 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:38.279 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.279 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.279 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:38.279 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:38.279 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.279 13:25:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.279 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.538 [2024-11-20 13:25:20.118359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:11:38.538 [2024-11-20 13:25:20.118472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.538 [2024-11-20 13:25:20.118497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:38.538 [2024-11-20 13:25:20.118510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.538 [2024-11-20 13:25:20.120879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.538 [2024-11-20 13:25:20.120926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:38.538 [2024-11-20 13:25:20.121035] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:38.538 [2024-11-20 13:25:20.121089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:38.538 [2024-11-20 13:25:20.121219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.538 spare 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.538 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.797 [2024-11-20 13:25:20.221131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:11:38.797 [2024-11-20 13:25:20.221221] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:38.797 [2024-11-20 13:25:20.221562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cae960 00:11:38.797 [2024-11-20 13:25:20.221734] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:11:38.797 [2024-11-20 13:25:20.221746] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:11:38.797 [2024-11-20 13:25:20.221886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.797 
13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.797 "name": "raid_bdev1", 00:11:38.797 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:38.797 "strip_size_kb": 0, 00:11:38.797 "state": "online", 00:11:38.797 "raid_level": "raid1", 00:11:38.797 "superblock": true, 00:11:38.797 "num_base_bdevs": 2, 00:11:38.797 "num_base_bdevs_discovered": 2, 00:11:38.797 "num_base_bdevs_operational": 2, 00:11:38.797 "base_bdevs_list": [ 00:11:38.797 { 00:11:38.797 "name": "spare", 00:11:38.797 "uuid": "8bc78c0f-22c4-5326-b3e8-9078ba222ad0", 00:11:38.797 "is_configured": true, 00:11:38.797 "data_offset": 2048, 00:11:38.797 "data_size": 63488 00:11:38.797 }, 00:11:38.797 { 00:11:38.797 "name": "BaseBdev2", 00:11:38.797 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:38.797 "is_configured": true, 00:11:38.797 "data_offset": 2048, 00:11:38.797 "data_size": 63488 00:11:38.797 } 00:11:38.797 ] 00:11:38.797 }' 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.797 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.055 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:39.055 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:39.055 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:39.055 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:39.055 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:39.055 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.055 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.055 13:25:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.055 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:39.313 "name": "raid_bdev1", 00:11:39.313 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:39.313 "strip_size_kb": 0, 00:11:39.313 "state": "online", 00:11:39.313 "raid_level": "raid1", 00:11:39.313 "superblock": true, 00:11:39.313 "num_base_bdevs": 2, 00:11:39.313 "num_base_bdevs_discovered": 2, 00:11:39.313 "num_base_bdevs_operational": 2, 00:11:39.313 "base_bdevs_list": [ 00:11:39.313 { 00:11:39.313 "name": "spare", 00:11:39.313 "uuid": "8bc78c0f-22c4-5326-b3e8-9078ba222ad0", 00:11:39.313 "is_configured": true, 00:11:39.313 "data_offset": 2048, 00:11:39.313 "data_size": 63488 00:11:39.313 }, 00:11:39.313 { 00:11:39.313 "name": "BaseBdev2", 00:11:39.313 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:39.313 "is_configured": true, 00:11:39.313 "data_offset": 2048, 00:11:39.313 "data_size": 63488 00:11:39.313 } 00:11:39.313 ] 00:11:39.313 }' 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.313 [2024-11-20 13:25:20.929192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.313 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.314 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.314 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:39.314 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.314 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.314 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.314 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:39.314 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.314 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.314 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.314 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.314 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.572 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.572 "name": "raid_bdev1", 00:11:39.572 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:39.572 "strip_size_kb": 0, 00:11:39.572 "state": "online", 00:11:39.572 "raid_level": "raid1", 00:11:39.572 "superblock": true, 00:11:39.572 "num_base_bdevs": 2, 00:11:39.572 "num_base_bdevs_discovered": 1, 00:11:39.572 "num_base_bdevs_operational": 1, 00:11:39.572 "base_bdevs_list": [ 00:11:39.572 { 00:11:39.572 "name": null, 00:11:39.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.572 "is_configured": false, 00:11:39.572 "data_offset": 0, 00:11:39.572 "data_size": 63488 00:11:39.572 }, 00:11:39.572 { 00:11:39.572 "name": "BaseBdev2", 00:11:39.572 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:39.572 "is_configured": true, 00:11:39.572 "data_offset": 2048, 00:11:39.572 "data_size": 63488 00:11:39.572 } 00:11:39.572 ] 00:11:39.572 }' 00:11:39.572 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.572 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.831 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:39.831 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.831 13:25:21 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.831 [2024-11-20 13:25:21.408312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:39.832 [2024-11-20 13:25:21.408624] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:39.832 [2024-11-20 13:25:21.408695] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:39.832 [2024-11-20 13:25:21.408782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:39.832 [2024-11-20 13:25:21.413730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caea30 00:11:39.832 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.832 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:39.832 [2024-11-20 13:25:21.415892] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:40.769 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:40.769 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:40.769 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:40.769 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:40.769 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:40.769 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.769 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.769 13:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:40.769 13:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.028 13:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.028 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.028 "name": "raid_bdev1", 00:11:41.028 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:41.028 "strip_size_kb": 0, 00:11:41.028 "state": "online", 00:11:41.028 "raid_level": "raid1", 00:11:41.028 "superblock": true, 00:11:41.028 "num_base_bdevs": 2, 00:11:41.028 "num_base_bdevs_discovered": 2, 00:11:41.028 "num_base_bdevs_operational": 2, 00:11:41.028 "process": { 00:11:41.028 "type": "rebuild", 00:11:41.028 "target": "spare", 00:11:41.028 "progress": { 00:11:41.028 "blocks": 20480, 00:11:41.028 "percent": 32 00:11:41.028 } 00:11:41.028 }, 00:11:41.028 "base_bdevs_list": [ 00:11:41.028 { 00:11:41.028 "name": "spare", 00:11:41.028 "uuid": "8bc78c0f-22c4-5326-b3e8-9078ba222ad0", 00:11:41.028 "is_configured": true, 00:11:41.028 "data_offset": 2048, 00:11:41.028 "data_size": 63488 00:11:41.028 }, 00:11:41.028 { 00:11:41.028 "name": "BaseBdev2", 00:11:41.028 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:41.028 "is_configured": true, 00:11:41.028 "data_offset": 2048, 00:11:41.028 "data_size": 63488 00:11:41.028 } 00:11:41.028 ] 00:11:41.028 }' 00:11:41.028 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:41.028 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:41.028 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:41.028 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:41.028 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:41.029 13:25:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.029 [2024-11-20 13:25:22.580246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:41.029 [2024-11-20 13:25:22.621203] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:41.029 [2024-11-20 13:25:22.621270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.029 [2024-11-20 13:25:22.621304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:41.029 [2024-11-20 13:25:22.621311] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.029 "name": "raid_bdev1", 00:11:41.029 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:41.029 "strip_size_kb": 0, 00:11:41.029 "state": "online", 00:11:41.029 "raid_level": "raid1", 00:11:41.029 "superblock": true, 00:11:41.029 "num_base_bdevs": 2, 00:11:41.029 "num_base_bdevs_discovered": 1, 00:11:41.029 "num_base_bdevs_operational": 1, 00:11:41.029 "base_bdevs_list": [ 00:11:41.029 { 00:11:41.029 "name": null, 00:11:41.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.029 "is_configured": false, 00:11:41.029 "data_offset": 0, 00:11:41.029 "data_size": 63488 00:11:41.029 }, 00:11:41.029 { 00:11:41.029 "name": "BaseBdev2", 00:11:41.029 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:41.029 "is_configured": true, 00:11:41.029 "data_offset": 2048, 00:11:41.029 "data_size": 63488 00:11:41.029 } 00:11:41.029 ] 00:11:41.029 }' 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.029 13:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.597 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:41.597 13:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:41.597 13:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.597 [2024-11-20 13:25:23.069452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:41.597 [2024-11-20 13:25:23.069603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.597 [2024-11-20 13:25:23.069652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:41.597 [2024-11-20 13:25:23.069688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.597 [2024-11-20 13:25:23.070201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.597 [2024-11-20 13:25:23.070267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:41.597 [2024-11-20 13:25:23.070406] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:41.597 [2024-11-20 13:25:23.070451] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:41.597 [2024-11-20 13:25:23.070525] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:41.597 [2024-11-20 13:25:23.070598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:41.597 [2024-11-20 13:25:23.075675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:11:41.597 spare 00:11:41.597 13:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.597 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:41.597 [2024-11-20 13:25:23.077899] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:42.533 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:42.533 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.533 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:42.533 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:42.533 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.533 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.533 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.533 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.533 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.533 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.533 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.533 "name": "raid_bdev1", 00:11:42.533 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:42.533 "strip_size_kb": 0, 00:11:42.533 "state": "online", 00:11:42.533 
"raid_level": "raid1", 00:11:42.533 "superblock": true, 00:11:42.533 "num_base_bdevs": 2, 00:11:42.533 "num_base_bdevs_discovered": 2, 00:11:42.533 "num_base_bdevs_operational": 2, 00:11:42.533 "process": { 00:11:42.533 "type": "rebuild", 00:11:42.533 "target": "spare", 00:11:42.533 "progress": { 00:11:42.533 "blocks": 20480, 00:11:42.533 "percent": 32 00:11:42.533 } 00:11:42.533 }, 00:11:42.533 "base_bdevs_list": [ 00:11:42.533 { 00:11:42.533 "name": "spare", 00:11:42.533 "uuid": "8bc78c0f-22c4-5326-b3e8-9078ba222ad0", 00:11:42.533 "is_configured": true, 00:11:42.533 "data_offset": 2048, 00:11:42.533 "data_size": 63488 00:11:42.533 }, 00:11:42.533 { 00:11:42.533 "name": "BaseBdev2", 00:11:42.533 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:42.533 "is_configured": true, 00:11:42.533 "data_offset": 2048, 00:11:42.533 "data_size": 63488 00:11:42.533 } 00:11:42.533 ] 00:11:42.533 }' 00:11:42.533 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.533 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:42.533 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.792 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:42.792 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:42.792 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.792 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.792 [2024-11-20 13:25:24.242371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:42.792 [2024-11-20 13:25:24.283307] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:42.793 [2024-11-20 13:25:24.283383] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.793 [2024-11-20 13:25:24.283401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:42.793 [2024-11-20 13:25:24.283411] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.793 13:25:24 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.793 "name": "raid_bdev1", 00:11:42.793 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:42.793 "strip_size_kb": 0, 00:11:42.793 "state": "online", 00:11:42.793 "raid_level": "raid1", 00:11:42.793 "superblock": true, 00:11:42.793 "num_base_bdevs": 2, 00:11:42.793 "num_base_bdevs_discovered": 1, 00:11:42.793 "num_base_bdevs_operational": 1, 00:11:42.793 "base_bdevs_list": [ 00:11:42.793 { 00:11:42.793 "name": null, 00:11:42.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.793 "is_configured": false, 00:11:42.793 "data_offset": 0, 00:11:42.793 "data_size": 63488 00:11:42.793 }, 00:11:42.793 { 00:11:42.793 "name": "BaseBdev2", 00:11:42.793 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:42.793 "is_configured": true, 00:11:42.793 "data_offset": 2048, 00:11:42.793 "data_size": 63488 00:11:42.793 } 00:11:42.793 ] 00:11:42.793 }' 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.793 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.052 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:43.052 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:43.052 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:43.052 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:43.052 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:43.052 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.052 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.052 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.052 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.052 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.312 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:43.312 "name": "raid_bdev1", 00:11:43.312 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:43.312 "strip_size_kb": 0, 00:11:43.312 "state": "online", 00:11:43.312 "raid_level": "raid1", 00:11:43.312 "superblock": true, 00:11:43.312 "num_base_bdevs": 2, 00:11:43.312 "num_base_bdevs_discovered": 1, 00:11:43.312 "num_base_bdevs_operational": 1, 00:11:43.312 "base_bdevs_list": [ 00:11:43.312 { 00:11:43.312 "name": null, 00:11:43.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.312 "is_configured": false, 00:11:43.312 "data_offset": 0, 00:11:43.312 "data_size": 63488 00:11:43.312 }, 00:11:43.312 { 00:11:43.312 "name": "BaseBdev2", 00:11:43.312 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:43.312 "is_configured": true, 00:11:43.312 "data_offset": 2048, 00:11:43.312 "data_size": 63488 00:11:43.312 } 00:11:43.312 ] 00:11:43.312 }' 00:11:43.312 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:43.312 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:43.312 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:43.312 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:43.312 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:43.312 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:43.312 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.312 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.312 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:43.312 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.312 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.312 [2024-11-20 13:25:24.855548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:43.312 [2024-11-20 13:25:24.855631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.312 [2024-11-20 13:25:24.855654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:43.312 [2024-11-20 13:25:24.855666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.312 [2024-11-20 13:25:24.856138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.312 [2024-11-20 13:25:24.856161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:43.312 [2024-11-20 13:25:24.856244] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:43.312 [2024-11-20 13:25:24.856275] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:43.312 [2024-11-20 13:25:24.856301] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:43.312 [2024-11-20 13:25:24.856325] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:43.312 BaseBdev1 00:11:43.312 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:43.312 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.250 "name": "raid_bdev1", 00:11:44.250 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:44.250 "strip_size_kb": 0, 
00:11:44.250 "state": "online", 00:11:44.250 "raid_level": "raid1", 00:11:44.250 "superblock": true, 00:11:44.250 "num_base_bdevs": 2, 00:11:44.250 "num_base_bdevs_discovered": 1, 00:11:44.250 "num_base_bdevs_operational": 1, 00:11:44.250 "base_bdevs_list": [ 00:11:44.250 { 00:11:44.250 "name": null, 00:11:44.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.250 "is_configured": false, 00:11:44.250 "data_offset": 0, 00:11:44.250 "data_size": 63488 00:11:44.250 }, 00:11:44.250 { 00:11:44.250 "name": "BaseBdev2", 00:11:44.250 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:44.250 "is_configured": true, 00:11:44.250 "data_offset": 2048, 00:11:44.250 "data_size": 63488 00:11:44.250 } 00:11:44.250 ] 00:11:44.250 }' 00:11:44.250 13:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.508 13:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.767 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:44.767 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:44.767 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:44.767 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:44.767 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.767 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.767 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.767 13:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.767 13:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.767 13:25:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.767 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:44.767 "name": "raid_bdev1", 00:11:44.767 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:44.767 "strip_size_kb": 0, 00:11:44.767 "state": "online", 00:11:44.767 "raid_level": "raid1", 00:11:44.767 "superblock": true, 00:11:44.767 "num_base_bdevs": 2, 00:11:44.767 "num_base_bdevs_discovered": 1, 00:11:44.767 "num_base_bdevs_operational": 1, 00:11:44.767 "base_bdevs_list": [ 00:11:44.767 { 00:11:44.767 "name": null, 00:11:44.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.767 "is_configured": false, 00:11:44.767 "data_offset": 0, 00:11:44.767 "data_size": 63488 00:11:44.767 }, 00:11:44.767 { 00:11:44.767 "name": "BaseBdev2", 00:11:44.767 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:44.767 "is_configured": true, 00:11:44.767 "data_offset": 2048, 00:11:44.767 "data_size": 63488 00:11:44.767 } 00:11:44.767 ] 00:11:44.767 }' 00:11:44.767 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.767 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:44.767 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.025 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:45.025 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:45.025 13:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:11:45.025 13:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:45.025 13:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:45.025 13:25:26 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.025 13:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:45.025 13:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.025 13:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:45.025 13:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.025 13:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.025 [2024-11-20 13:25:26.484853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.025 [2024-11-20 13:25:26.485102] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:45.025 [2024-11-20 13:25:26.485180] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:45.025 request: 00:11:45.025 { 00:11:45.025 "base_bdev": "BaseBdev1", 00:11:45.025 "raid_bdev": "raid_bdev1", 00:11:45.025 "method": "bdev_raid_add_base_bdev", 00:11:45.025 "req_id": 1 00:11:45.025 } 00:11:45.025 Got JSON-RPC error response 00:11:45.025 response: 00:11:45.025 { 00:11:45.025 "code": -22, 00:11:45.025 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:45.025 } 00:11:45.025 13:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:45.025 13:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:11:45.025 13:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:45.025 13:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:45.026 13:25:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:45.026 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:45.966 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:45.966 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.966 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.966 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.966 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.966 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:45.966 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.966 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.966 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.966 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.966 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.966 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.966 13:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.966 13:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.966 13:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.967 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.967 "name": "raid_bdev1", 00:11:45.967 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 
00:11:45.967 "strip_size_kb": 0, 00:11:45.967 "state": "online", 00:11:45.967 "raid_level": "raid1", 00:11:45.967 "superblock": true, 00:11:45.967 "num_base_bdevs": 2, 00:11:45.967 "num_base_bdevs_discovered": 1, 00:11:45.967 "num_base_bdevs_operational": 1, 00:11:45.967 "base_bdevs_list": [ 00:11:45.967 { 00:11:45.967 "name": null, 00:11:45.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.967 "is_configured": false, 00:11:45.967 "data_offset": 0, 00:11:45.967 "data_size": 63488 00:11:45.967 }, 00:11:45.967 { 00:11:45.967 "name": "BaseBdev2", 00:11:45.967 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:45.967 "is_configured": true, 00:11:45.967 "data_offset": 2048, 00:11:45.967 "data_size": 63488 00:11:45.967 } 00:11:45.967 ] 00:11:45.967 }' 00:11:45.967 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.967 13:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.536 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:46.536 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:46.536 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:46.536 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:46.536 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:46.536 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.536 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.536 13:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.536 13:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.536 13:25:27 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.536 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:46.536 "name": "raid_bdev1", 00:11:46.536 "uuid": "bee1678f-78f0-4257-8831-4b99d43dc07f", 00:11:46.536 "strip_size_kb": 0, 00:11:46.536 "state": "online", 00:11:46.536 "raid_level": "raid1", 00:11:46.536 "superblock": true, 00:11:46.536 "num_base_bdevs": 2, 00:11:46.536 "num_base_bdevs_discovered": 1, 00:11:46.536 "num_base_bdevs_operational": 1, 00:11:46.536 "base_bdevs_list": [ 00:11:46.536 { 00:11:46.536 "name": null, 00:11:46.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.536 "is_configured": false, 00:11:46.536 "data_offset": 0, 00:11:46.536 "data_size": 63488 00:11:46.536 }, 00:11:46.536 { 00:11:46.536 "name": "BaseBdev2", 00:11:46.536 "uuid": "a3cbd3e3-7c83-52bc-8090-53ce996c5e88", 00:11:46.536 "is_configured": true, 00:11:46.536 "data_offset": 2048, 00:11:46.536 "data_size": 63488 00:11:46.536 } 00:11:46.536 ] 00:11:46.536 }' 00:11:46.536 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:46.536 13:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:46.536 13:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:46.536 13:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:46.536 13:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86108 00:11:46.536 13:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 86108 ']' 00:11:46.536 13:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 86108 00:11:46.536 13:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:46.536 13:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:11:46.536 13:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86108 00:11:46.536 13:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.536 killing process with pid 86108 00:11:46.536 Received shutdown signal, test time was about 60.000000 seconds 00:11:46.536 00:11:46.536 Latency(us) 00:11:46.536 [2024-11-20T13:25:28.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.536 [2024-11-20T13:25:28.204Z] =================================================================================================================== 00:11:46.536 [2024-11-20T13:25:28.204Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:46.537 13:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.537 13:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86108' 00:11:46.537 13:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 86108 00:11:46.537 [2024-11-20 13:25:28.123433] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:46.537 [2024-11-20 13:25:28.123584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.537 13:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 86108 00:11:46.537 [2024-11-20 13:25:28.123655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.537 [2024-11-20 13:25:28.123666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:11:46.537 [2024-11-20 13:25:28.155483] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:46.796 13:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:46.796 00:11:46.796 real 0m22.837s 
00:11:46.796 user 0m28.094s 00:11:46.796 sys 0m3.829s 00:11:46.796 13:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.796 13:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.796 ************************************ 00:11:46.796 END TEST raid_rebuild_test_sb 00:11:46.796 ************************************ 00:11:46.796 13:25:28 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:46.796 13:25:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:46.796 13:25:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.796 13:25:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:46.796 ************************************ 00:11:46.796 START TEST raid_rebuild_test_io 00:11:46.796 ************************************ 00:11:46.796 13:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:11:46.796 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:46.796 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:46.796 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:46.796 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:46.796 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:46.796 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:46.797 
13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=86839 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 86839 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 86839 ']' 00:11:46.797 13:25:28 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.797 13:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.056 [2024-11-20 13:25:28.534405] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:11:47.056 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:47.056 Zero copy mechanism will not be used. 00:11:47.056 [2024-11-20 13:25:28.535011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86839 ] 00:11:47.056 [2024-11-20 13:25:28.669552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.056 [2024-11-20 13:25:28.695059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.314 [2024-11-20 13:25:28.738039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.314 [2024-11-20 13:25:28.738083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.881 BaseBdev1_malloc 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.881 [2024-11-20 13:25:29.437205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:47.881 [2024-11-20 13:25:29.437268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.881 [2024-11-20 13:25:29.437296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:47.881 [2024-11-20 13:25:29.437309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.881 [2024-11-20 13:25:29.439631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.881 [2024-11-20 13:25:29.439736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:47.881 BaseBdev1 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:47.881 13:25:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.881 BaseBdev2_malloc 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:47.881 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.882 [2024-11-20 13:25:29.466416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:47.882 [2024-11-20 13:25:29.466474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.882 [2024-11-20 13:25:29.466499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:47.882 [2024-11-20 13:25:29.466509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.882 [2024-11-20 13:25:29.468908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.882 [2024-11-20 13:25:29.468957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:47.882 BaseBdev2 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.882 spare_malloc 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.882 spare_delay 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.882 [2024-11-20 13:25:29.507619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:47.882 [2024-11-20 13:25:29.507683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.882 [2024-11-20 13:25:29.507710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:47.882 [2024-11-20 13:25:29.507720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.882 [2024-11-20 13:25:29.510138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.882 [2024-11-20 13:25:29.510176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:47.882 spare 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.882 13:25:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.882 [2024-11-20 13:25:29.519648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:47.882 [2024-11-20 13:25:29.521862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.882 [2024-11-20 13:25:29.522048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:47.882 [2024-11-20 13:25:29.522065] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:47.882 [2024-11-20 13:25:29.522395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:47.882 [2024-11-20 13:25:29.522557] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:47.882 [2024-11-20 13:25:29.522575] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:47.882 [2024-11-20 13:25:29.522746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.882 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.141 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.141 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.141 "name": "raid_bdev1", 00:11:48.141 "uuid": "4cfb1e86-8a85-4ef6-8686-2e539053e2e4", 00:11:48.141 "strip_size_kb": 0, 00:11:48.141 "state": "online", 00:11:48.141 "raid_level": "raid1", 00:11:48.141 "superblock": false, 00:11:48.141 "num_base_bdevs": 2, 00:11:48.141 "num_base_bdevs_discovered": 2, 00:11:48.141 "num_base_bdevs_operational": 2, 00:11:48.141 "base_bdevs_list": [ 00:11:48.141 { 00:11:48.141 "name": "BaseBdev1", 00:11:48.141 "uuid": "fa667454-ca4d-5e8d-8d24-873df84aa836", 00:11:48.141 "is_configured": true, 00:11:48.141 "data_offset": 0, 00:11:48.141 "data_size": 65536 00:11:48.141 }, 00:11:48.141 { 00:11:48.141 "name": "BaseBdev2", 00:11:48.141 "uuid": "a7c37431-4c6e-5aa7-9f70-959f8f2f18b5", 00:11:48.141 "is_configured": true, 00:11:48.141 "data_offset": 0, 00:11:48.141 "data_size": 65536 00:11:48.141 } 00:11:48.141 ] 00:11:48.141 }' 00:11:48.141 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.141 13:25:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:11:48.400 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:48.400 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.400 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.400 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.400 [2024-11-20 13:25:29.955247] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.400 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.400 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:48.400 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.400 13:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:48.400 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.400 13:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:11:48.400 [2024-11-20 13:25:30.026792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.400 13:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.658 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:48.658 "name": "raid_bdev1", 00:11:48.658 "uuid": "4cfb1e86-8a85-4ef6-8686-2e539053e2e4", 00:11:48.658 "strip_size_kb": 0, 00:11:48.658 "state": "online", 00:11:48.658 "raid_level": "raid1", 00:11:48.658 "superblock": false, 00:11:48.658 "num_base_bdevs": 2, 00:11:48.658 "num_base_bdevs_discovered": 1, 00:11:48.658 "num_base_bdevs_operational": 1, 00:11:48.658 "base_bdevs_list": [ 00:11:48.658 { 00:11:48.658 "name": null, 00:11:48.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.658 "is_configured": false, 00:11:48.658 "data_offset": 0, 00:11:48.658 "data_size": 65536 00:11:48.658 }, 00:11:48.658 { 00:11:48.658 "name": "BaseBdev2", 00:11:48.658 "uuid": "a7c37431-4c6e-5aa7-9f70-959f8f2f18b5", 00:11:48.658 "is_configured": true, 00:11:48.658 "data_offset": 0, 00:11:48.658 "data_size": 65536 00:11:48.658 } 00:11:48.658 ] 00:11:48.658 }' 00:11:48.658 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.659 13:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.659 [2024-11-20 13:25:30.120804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:48.659 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:48.659 Zero copy mechanism will not be used. 00:11:48.659 Running I/O for 60 seconds... 
00:11:48.917 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:48.917 13:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.917 13:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.917 [2024-11-20 13:25:30.496370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:48.917 13:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.917 13:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:48.917 [2024-11-20 13:25:30.550083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:48.917 [2024-11-20 13:25:30.552407] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:49.175 [2024-11-20 13:25:30.668821] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:49.175 [2024-11-20 13:25:30.669476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:49.175 [2024-11-20 13:25:30.780442] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:49.175 [2024-11-20 13:25:30.780809] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:49.741 [2024-11-20 13:25:31.110980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:49.741 [2024-11-20 13:25:31.111602] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:49.741 139.00 IOPS, 417.00 MiB/s [2024-11-20T13:25:31.409Z] [2024-11-20 13:25:31.230095] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:49.741 [2024-11-20 13:25:31.230427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:49.999 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:49.999 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:49.999 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:49.999 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:49.999 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:49.999 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.999 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.999 13:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.999 13:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:49.999 13:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.999 [2024-11-20 13:25:31.569425] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:49.999 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:49.999 "name": "raid_bdev1", 00:11:49.999 "uuid": "4cfb1e86-8a85-4ef6-8686-2e539053e2e4", 00:11:49.999 "strip_size_kb": 0, 00:11:49.999 "state": "online", 00:11:49.999 "raid_level": "raid1", 00:11:49.999 "superblock": false, 00:11:49.999 "num_base_bdevs": 2, 00:11:49.999 "num_base_bdevs_discovered": 2, 00:11:49.999 
"num_base_bdevs_operational": 2, 00:11:49.999 "process": { 00:11:49.999 "type": "rebuild", 00:11:49.999 "target": "spare", 00:11:49.999 "progress": { 00:11:49.999 "blocks": 12288, 00:11:49.999 "percent": 18 00:11:49.999 } 00:11:49.999 }, 00:11:49.999 "base_bdevs_list": [ 00:11:49.999 { 00:11:49.999 "name": "spare", 00:11:49.999 "uuid": "0a05c8cb-0786-5883-8f70-fd7fb1de8dd2", 00:11:49.999 "is_configured": true, 00:11:49.999 "data_offset": 0, 00:11:49.999 "data_size": 65536 00:11:49.999 }, 00:11:49.999 { 00:11:49.999 "name": "BaseBdev2", 00:11:49.999 "uuid": "a7c37431-4c6e-5aa7-9f70-959f8f2f18b5", 00:11:49.999 "is_configured": true, 00:11:49.999 "data_offset": 0, 00:11:49.999 "data_size": 65536 00:11:49.999 } 00:11:49.999 ] 00:11:49.999 }' 00:11:49.999 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:49.999 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:49.999 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.312 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:50.312 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:50.312 13:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.312 13:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.312 [2024-11-20 13:25:31.710589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:50.312 [2024-11-20 13:25:31.806370] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:50.312 [2024-11-20 13:25:31.806823] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:50.312 [2024-11-20 
13:25:31.915062] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:50.312 [2024-11-20 13:25:31.930537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.312 [2024-11-20 13:25:31.930716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:50.312 [2024-11-20 13:25:31.930744] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:50.312 [2024-11-20 13:25:31.957618] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:11:50.571 13:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.571 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:50.571 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.571 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.571 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.571 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.571 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:50.571 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.571 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.571 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.571 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.571 13:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.571 13:25:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.571 13:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.571 13:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.571 13:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.571 13:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.571 "name": "raid_bdev1", 00:11:50.571 "uuid": "4cfb1e86-8a85-4ef6-8686-2e539053e2e4", 00:11:50.571 "strip_size_kb": 0, 00:11:50.571 "state": "online", 00:11:50.571 "raid_level": "raid1", 00:11:50.571 "superblock": false, 00:11:50.571 "num_base_bdevs": 2, 00:11:50.571 "num_base_bdevs_discovered": 1, 00:11:50.571 "num_base_bdevs_operational": 1, 00:11:50.571 "base_bdevs_list": [ 00:11:50.571 { 00:11:50.571 "name": null, 00:11:50.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.571 "is_configured": false, 00:11:50.571 "data_offset": 0, 00:11:50.571 "data_size": 65536 00:11:50.571 }, 00:11:50.571 { 00:11:50.571 "name": "BaseBdev2", 00:11:50.571 "uuid": "a7c37431-4c6e-5aa7-9f70-959f8f2f18b5", 00:11:50.571 "is_configured": true, 00:11:50.571 "data_offset": 0, 00:11:50.571 "data_size": 65536 00:11:50.571 } 00:11:50.571 ] 00:11:50.571 }' 00:11:50.571 13:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.571 13:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.830 117.50 IOPS, 352.50 MiB/s [2024-11-20T13:25:32.498Z] 13:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:50.830 13:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.830 13:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:50.830 13:25:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:50.830 13:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.830 13:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.830 13:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.830 13:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.830 13:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.830 13:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.090 13:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.090 "name": "raid_bdev1", 00:11:51.090 "uuid": "4cfb1e86-8a85-4ef6-8686-2e539053e2e4", 00:11:51.090 "strip_size_kb": 0, 00:11:51.090 "state": "online", 00:11:51.090 "raid_level": "raid1", 00:11:51.090 "superblock": false, 00:11:51.090 "num_base_bdevs": 2, 00:11:51.090 "num_base_bdevs_discovered": 1, 00:11:51.090 "num_base_bdevs_operational": 1, 00:11:51.090 "base_bdevs_list": [ 00:11:51.090 { 00:11:51.090 "name": null, 00:11:51.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.090 "is_configured": false, 00:11:51.090 "data_offset": 0, 00:11:51.090 "data_size": 65536 00:11:51.090 }, 00:11:51.090 { 00:11:51.090 "name": "BaseBdev2", 00:11:51.090 "uuid": "a7c37431-4c6e-5aa7-9f70-959f8f2f18b5", 00:11:51.090 "is_configured": true, 00:11:51.090 "data_offset": 0, 00:11:51.090 "data_size": 65536 00:11:51.090 } 00:11:51.090 ] 00:11:51.090 }' 00:11:51.090 13:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.090 13:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:51.090 13:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- 
# jq -r '.process.target // "none"' 00:11:51.090 13:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:51.090 13:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:51.090 13:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.090 13:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.090 [2024-11-20 13:25:32.598115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:51.090 13:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.090 13:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:51.090 [2024-11-20 13:25:32.664399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:11:51.090 [2024-11-20 13:25:32.666758] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:51.349 [2024-11-20 13:25:32.782312] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:51.349 [2024-11-20 13:25:32.782876] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:51.349 [2024-11-20 13:25:32.990160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:51.349 [2024-11-20 13:25:32.990467] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:51.868 129.33 IOPS, 388.00 MiB/s [2024-11-20T13:25:33.536Z] [2024-11-20 13:25:33.328853] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:51.868 [2024-11-20 13:25:33.329368] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:51.868 [2024-11-20 13:25:33.443357] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.127 "name": "raid_bdev1", 00:11:52.127 "uuid": "4cfb1e86-8a85-4ef6-8686-2e539053e2e4", 00:11:52.127 "strip_size_kb": 0, 00:11:52.127 "state": "online", 00:11:52.127 "raid_level": "raid1", 00:11:52.127 "superblock": false, 00:11:52.127 "num_base_bdevs": 2, 00:11:52.127 "num_base_bdevs_discovered": 2, 00:11:52.127 "num_base_bdevs_operational": 2, 00:11:52.127 "process": { 00:11:52.127 "type": "rebuild", 00:11:52.127 "target": "spare", 00:11:52.127 "progress": { 00:11:52.127 "blocks": 10240, 00:11:52.127 "percent": 15 00:11:52.127 } 
00:11:52.127 }, 00:11:52.127 "base_bdevs_list": [ 00:11:52.127 { 00:11:52.127 "name": "spare", 00:11:52.127 "uuid": "0a05c8cb-0786-5883-8f70-fd7fb1de8dd2", 00:11:52.127 "is_configured": true, 00:11:52.127 "data_offset": 0, 00:11:52.127 "data_size": 65536 00:11:52.127 }, 00:11:52.127 { 00:11:52.127 "name": "BaseBdev2", 00:11:52.127 "uuid": "a7c37431-4c6e-5aa7-9f70-959f8f2f18b5", 00:11:52.127 "is_configured": true, 00:11:52.127 "data_offset": 0, 00:11:52.127 "data_size": 65536 00:11:52.127 } 00:11:52.127 ] 00:11:52.127 }' 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=322 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:52.127 13:25:33 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.127 13:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.385 13:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.385 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.385 "name": "raid_bdev1", 00:11:52.385 "uuid": "4cfb1e86-8a85-4ef6-8686-2e539053e2e4", 00:11:52.385 "strip_size_kb": 0, 00:11:52.385 "state": "online", 00:11:52.385 "raid_level": "raid1", 00:11:52.385 "superblock": false, 00:11:52.385 "num_base_bdevs": 2, 00:11:52.385 "num_base_bdevs_discovered": 2, 00:11:52.385 "num_base_bdevs_operational": 2, 00:11:52.385 "process": { 00:11:52.385 "type": "rebuild", 00:11:52.385 "target": "spare", 00:11:52.385 "progress": { 00:11:52.385 "blocks": 14336, 00:11:52.385 "percent": 21 00:11:52.385 } 00:11:52.385 }, 00:11:52.385 "base_bdevs_list": [ 00:11:52.385 { 00:11:52.385 "name": "spare", 00:11:52.385 "uuid": "0a05c8cb-0786-5883-8f70-fd7fb1de8dd2", 00:11:52.385 "is_configured": true, 00:11:52.385 "data_offset": 0, 00:11:52.385 "data_size": 65536 00:11:52.385 }, 00:11:52.385 { 00:11:52.385 "name": "BaseBdev2", 00:11:52.385 "uuid": "a7c37431-4c6e-5aa7-9f70-959f8f2f18b5", 00:11:52.385 "is_configured": true, 00:11:52.385 "data_offset": 0, 00:11:52.385 "data_size": 65536 00:11:52.385 } 00:11:52.385 ] 00:11:52.385 }' 00:11:52.385 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:11:52.385 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:52.385 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.385 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:52.385 13:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:52.385 [2024-11-20 13:25:33.904030] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:52.644 117.25 IOPS, 351.75 MiB/s [2024-11-20T13:25:34.312Z] [2024-11-20 13:25:34.232753] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:52.902 [2024-11-20 13:25:34.460743] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:53.160 [2024-11-20 13:25:34.807668] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:53.160 [2024-11-20 13:25:34.808303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:53.419 13:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:53.419 13:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:53.419 13:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.419 13:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:53.419 13:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:53.419 13:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:11:53.419 13:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.419 13:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.419 13:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.419 13:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.419 13:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.419 13:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.419 "name": "raid_bdev1", 00:11:53.419 "uuid": "4cfb1e86-8a85-4ef6-8686-2e539053e2e4", 00:11:53.419 "strip_size_kb": 0, 00:11:53.419 "state": "online", 00:11:53.419 "raid_level": "raid1", 00:11:53.419 "superblock": false, 00:11:53.419 "num_base_bdevs": 2, 00:11:53.419 "num_base_bdevs_discovered": 2, 00:11:53.419 "num_base_bdevs_operational": 2, 00:11:53.419 "process": { 00:11:53.419 "type": "rebuild", 00:11:53.419 "target": "spare", 00:11:53.419 "progress": { 00:11:53.419 "blocks": 26624, 00:11:53.419 "percent": 40 00:11:53.419 } 00:11:53.419 }, 00:11:53.419 "base_bdevs_list": [ 00:11:53.419 { 00:11:53.419 "name": "spare", 00:11:53.419 "uuid": "0a05c8cb-0786-5883-8f70-fd7fb1de8dd2", 00:11:53.419 "is_configured": true, 00:11:53.419 "data_offset": 0, 00:11:53.419 "data_size": 65536 00:11:53.419 }, 00:11:53.419 { 00:11:53.419 "name": "BaseBdev2", 00:11:53.419 "uuid": "a7c37431-4c6e-5aa7-9f70-959f8f2f18b5", 00:11:53.419 "is_configured": true, 00:11:53.419 "data_offset": 0, 00:11:53.419 "data_size": 65536 00:11:53.419 } 00:11:53.419 ] 00:11:53.419 }' 00:11:53.419 13:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.419 13:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:53.419 13:25:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.419 13:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:53.419 13:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:53.419 [2024-11-20 13:25:35.030538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:53.419 [2024-11-20 13:25:35.030891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:53.678 106.60 IOPS, 319.80 MiB/s [2024-11-20T13:25:35.346Z] [2024-11-20 13:25:35.265944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:54.247 [2024-11-20 13:25:35.716786] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:54.508 [2024-11-20 13:25:35.940116] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:54.508 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:54.508 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:54.508 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.508 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:54.508 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:54.508 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.508 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.508 13:25:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.508 13:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.508 13:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.508 13:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.508 [2024-11-20 13:25:36.054991] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:54.508 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.508 "name": "raid_bdev1", 00:11:54.508 "uuid": "4cfb1e86-8a85-4ef6-8686-2e539053e2e4", 00:11:54.508 "strip_size_kb": 0, 00:11:54.508 "state": "online", 00:11:54.508 "raid_level": "raid1", 00:11:54.508 "superblock": false, 00:11:54.508 "num_base_bdevs": 2, 00:11:54.508 "num_base_bdevs_discovered": 2, 00:11:54.508 "num_base_bdevs_operational": 2, 00:11:54.508 "process": { 00:11:54.508 "type": "rebuild", 00:11:54.508 "target": "spare", 00:11:54.508 "progress": { 00:11:54.508 "blocks": 45056, 00:11:54.508 "percent": 68 00:11:54.508 } 00:11:54.508 }, 00:11:54.508 "base_bdevs_list": [ 00:11:54.508 { 00:11:54.508 "name": "spare", 00:11:54.508 "uuid": "0a05c8cb-0786-5883-8f70-fd7fb1de8dd2", 00:11:54.508 "is_configured": true, 00:11:54.508 "data_offset": 0, 00:11:54.508 "data_size": 65536 00:11:54.508 }, 00:11:54.508 { 00:11:54.508 "name": "BaseBdev2", 00:11:54.508 "uuid": "a7c37431-4c6e-5aa7-9f70-959f8f2f18b5", 00:11:54.508 "is_configured": true, 00:11:54.508 "data_offset": 0, 00:11:54.508 "data_size": 65536 00:11:54.508 } 00:11:54.508 ] 00:11:54.508 }' 00:11:54.508 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:54.508 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:54.508 97.33 
IOPS, 292.00 MiB/s [2024-11-20T13:25:36.176Z] 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.771 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:54.771 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:55.028 [2024-11-20 13:25:36.601121] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:11:55.595 [2024-11-20 13:25:37.050212] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:55.595 87.71 IOPS, 263.14 MiB/s [2024-11-20T13:25:37.263Z] [2024-11-20 13:25:37.150002] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:55.595 [2024-11-20 13:25:37.152012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.595 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:55.595 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:55.595 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.595 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:55.595 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:55.595 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.595 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.595 13:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.595 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.595 13:25:37 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.595 13:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.595 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.595 "name": "raid_bdev1", 00:11:55.595 "uuid": "4cfb1e86-8a85-4ef6-8686-2e539053e2e4", 00:11:55.595 "strip_size_kb": 0, 00:11:55.595 "state": "online", 00:11:55.595 "raid_level": "raid1", 00:11:55.595 "superblock": false, 00:11:55.595 "num_base_bdevs": 2, 00:11:55.595 "num_base_bdevs_discovered": 2, 00:11:55.595 "num_base_bdevs_operational": 2, 00:11:55.595 "base_bdevs_list": [ 00:11:55.595 { 00:11:55.595 "name": "spare", 00:11:55.595 "uuid": "0a05c8cb-0786-5883-8f70-fd7fb1de8dd2", 00:11:55.595 "is_configured": true, 00:11:55.595 "data_offset": 0, 00:11:55.595 "data_size": 65536 00:11:55.595 }, 00:11:55.595 { 00:11:55.595 "name": "BaseBdev2", 00:11:55.595 "uuid": "a7c37431-4c6e-5aa7-9f70-959f8f2f18b5", 00:11:55.595 "is_configured": true, 00:11:55.595 "data_offset": 0, 00:11:55.595 "data_size": 65536 00:11:55.595 } 00:11:55.595 ] 00:11:55.595 }' 00:11:55.595 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.855 "name": "raid_bdev1", 00:11:55.855 "uuid": "4cfb1e86-8a85-4ef6-8686-2e539053e2e4", 00:11:55.855 "strip_size_kb": 0, 00:11:55.855 "state": "online", 00:11:55.855 "raid_level": "raid1", 00:11:55.855 "superblock": false, 00:11:55.855 "num_base_bdevs": 2, 00:11:55.855 "num_base_bdevs_discovered": 2, 00:11:55.855 "num_base_bdevs_operational": 2, 00:11:55.855 "base_bdevs_list": [ 00:11:55.855 { 00:11:55.855 "name": "spare", 00:11:55.855 "uuid": "0a05c8cb-0786-5883-8f70-fd7fb1de8dd2", 00:11:55.855 "is_configured": true, 00:11:55.855 "data_offset": 0, 00:11:55.855 "data_size": 65536 00:11:55.855 }, 00:11:55.855 { 00:11:55.855 "name": "BaseBdev2", 00:11:55.855 "uuid": "a7c37431-4c6e-5aa7-9f70-959f8f2f18b5", 00:11:55.855 "is_configured": true, 00:11:55.855 "data_offset": 0, 00:11:55.855 "data_size": 65536 00:11:55.855 } 00:11:55.855 ] 00:11:55.855 }' 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.855 13:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.114 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.114 "name": 
"raid_bdev1", 00:11:56.114 "uuid": "4cfb1e86-8a85-4ef6-8686-2e539053e2e4", 00:11:56.114 "strip_size_kb": 0, 00:11:56.114 "state": "online", 00:11:56.114 "raid_level": "raid1", 00:11:56.114 "superblock": false, 00:11:56.114 "num_base_bdevs": 2, 00:11:56.114 "num_base_bdevs_discovered": 2, 00:11:56.114 "num_base_bdevs_operational": 2, 00:11:56.114 "base_bdevs_list": [ 00:11:56.114 { 00:11:56.114 "name": "spare", 00:11:56.114 "uuid": "0a05c8cb-0786-5883-8f70-fd7fb1de8dd2", 00:11:56.114 "is_configured": true, 00:11:56.114 "data_offset": 0, 00:11:56.114 "data_size": 65536 00:11:56.114 }, 00:11:56.114 { 00:11:56.114 "name": "BaseBdev2", 00:11:56.114 "uuid": "a7c37431-4c6e-5aa7-9f70-959f8f2f18b5", 00:11:56.114 "is_configured": true, 00:11:56.114 "data_offset": 0, 00:11:56.114 "data_size": 65536 00:11:56.114 } 00:11:56.114 ] 00:11:56.114 }' 00:11:56.114 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.114 13:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.373 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:56.373 13:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.373 13:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.373 [2024-11-20 13:25:37.940394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:56.373 [2024-11-20 13:25:37.940456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:56.632 00:11:56.632 Latency(us) 00:11:56.632 [2024-11-20T13:25:38.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.632 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:56.632 raid_bdev1 : 7.93 82.83 248.49 0.00 0.00 15697.61 289.76 114473.36 00:11:56.632 [2024-11-20T13:25:38.300Z] 
=================================================================================================================== 00:11:56.632 [2024-11-20T13:25:38.300Z] Total : 82.83 248.49 0.00 0.00 15697.61 289.76 114473.36 00:11:56.632 [2024-11-20 13:25:38.044944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.632 [2024-11-20 13:25:38.045028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:56.632 [2024-11-20 13:25:38.045131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:56.632 [2024-11-20 13:25:38.045146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:56.632 { 00:11:56.632 "results": [ 00:11:56.632 { 00:11:56.632 "job": "raid_bdev1", 00:11:56.632 "core_mask": "0x1", 00:11:56.632 "workload": "randrw", 00:11:56.632 "percentage": 50, 00:11:56.632 "status": "finished", 00:11:56.632 "queue_depth": 2, 00:11:56.632 "io_size": 3145728, 00:11:56.632 "runtime": 7.932049, 00:11:56.632 "iops": 82.82853522463111, 00:11:56.632 "mibps": 248.48560567389333, 00:11:56.632 "io_failed": 0, 00:11:56.632 "io_timeout": 0, 00:11:56.632 "avg_latency_us": 15697.609028733224, 00:11:56.632 "min_latency_us": 289.7606986899563, 00:11:56.632 "max_latency_us": 114473.36244541485 00:11:56.632 } 00:11:56.632 ], 00:11:56.632 "core_count": 1 00:11:56.632 } 00:11:56.632 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.632 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.633 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:56.633 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.633 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.633 13:25:38 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.633 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:56.633 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:56.633 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:56.633 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:56.633 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:56.633 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:56.633 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:56.633 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:56.633 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:56.633 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:56.633 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:56.633 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:56.633 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:56.891 /dev/nbd0 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.891 1+0 records in 00:11:56.891 1+0 records out 00:11:56.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474812 s, 8.6 MB/s 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # 
nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:56.891 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:57.150 /dev/nbd1 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- 
# (( i = 1 )) 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.150 1+0 records in 00:11:57.150 1+0 records out 00:11:57.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585446 s, 7.0 MB/s 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:11:57.150 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:57.409 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:57.409 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:57.409 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:57.409 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:57.409 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:57.409 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:57.409 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:57.409 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:57.410 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:57.410 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:57.410 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:57.410 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:57.410 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:57.410 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:57.410 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 86839 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 86839 ']' 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 86839 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86839 00:11:57.668 killing process with pid 86839 00:11:57.668 Received shutdown signal, test time was about 9.204078 seconds 00:11:57.668 00:11:57.668 Latency(us) 00:11:57.668 [2024-11-20T13:25:39.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:57.668 [2024-11-20T13:25:39.336Z] =================================================================================================================== 00:11:57.668 [2024-11-20T13:25:39.336Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.668 13:25:39 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86839' 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 86839 00:11:57.668 [2024-11-20 13:25:39.309635] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:57.668 13:25:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 86839 00:11:57.926 [2024-11-20 13:25:39.337551] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:57.926 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:57.926 00:11:57.926 real 0m11.117s 00:11:57.926 user 0m14.481s 00:11:57.926 sys 0m1.401s 00:11:57.926 13:25:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.926 13:25:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.926 ************************************ 00:11:57.926 END TEST raid_rebuild_test_io 00:11:57.926 ************************************ 00:11:58.185 13:25:39 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:11:58.185 13:25:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:58.185 13:25:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.185 13:25:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:58.185 ************************************ 00:11:58.185 START TEST raid_rebuild_test_sb_io 00:11:58.185 ************************************ 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:58.185 13:25:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 
00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87198 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87198 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 87198 ']' 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.185 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.185 [2024-11-20 13:25:39.722010] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:11:58.185 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:58.185 Zero copy mechanism will not be used. 
00:11:58.185 [2024-11-20 13:25:39.722226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87198 ] 00:11:58.444 [2024-11-20 13:25:39.876324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.444 [2024-11-20 13:25:39.905021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.444 [2024-11-20 13:25:39.950824] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.444 [2024-11-20 13:25:39.950946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.014 BaseBdev1_malloc 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.014 [2024-11-20 13:25:40.642754] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:59.014 [2024-11-20 13:25:40.642817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.014 [2024-11-20 13:25:40.642851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:59.014 [2024-11-20 13:25:40.642864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.014 [2024-11-20 13:25:40.645384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.014 [2024-11-20 13:25:40.645425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:59.014 BaseBdev1 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.014 BaseBdev2_malloc 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.014 [2024-11-20 13:25:40.671830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:59.014 [2024-11-20 13:25:40.671889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:59.014 [2024-11-20 13:25:40.671928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:59.014 [2024-11-20 13:25:40.671938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.014 [2024-11-20 13:25:40.674398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.014 [2024-11-20 13:25:40.674443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:59.014 BaseBdev2 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.014 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.274 spare_malloc 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.274 spare_delay 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.274 
[2024-11-20 13:25:40.713165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:59.274 [2024-11-20 13:25:40.713280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.274 [2024-11-20 13:25:40.713313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:59.274 [2024-11-20 13:25:40.713323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.274 [2024-11-20 13:25:40.715850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.274 [2024-11-20 13:25:40.715886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:59.274 spare 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.274 [2024-11-20 13:25:40.725219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.274 [2024-11-20 13:25:40.727375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.274 [2024-11-20 13:25:40.727603] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:59.274 [2024-11-20 13:25:40.727669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:59.274 [2024-11-20 13:25:40.728032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:59.274 [2024-11-20 13:25:40.728258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:59.274 [2024-11-20 
13:25:40.728313] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:59.274 [2024-11-20 13:25:40.728510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.274 "name": "raid_bdev1", 00:11:59.274 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:11:59.274 "strip_size_kb": 0, 00:11:59.274 "state": "online", 00:11:59.274 "raid_level": "raid1", 00:11:59.274 "superblock": true, 00:11:59.274 "num_base_bdevs": 2, 00:11:59.274 "num_base_bdevs_discovered": 2, 00:11:59.274 "num_base_bdevs_operational": 2, 00:11:59.274 "base_bdevs_list": [ 00:11:59.274 { 00:11:59.274 "name": "BaseBdev1", 00:11:59.274 "uuid": "ab8db887-be62-59cf-b9e8-5b70acba0078", 00:11:59.274 "is_configured": true, 00:11:59.274 "data_offset": 2048, 00:11:59.274 "data_size": 63488 00:11:59.274 }, 00:11:59.274 { 00:11:59.274 "name": "BaseBdev2", 00:11:59.274 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:11:59.274 "is_configured": true, 00:11:59.274 "data_offset": 2048, 00:11:59.274 "data_size": 63488 00:11:59.274 } 00:11:59.274 ] 00:11:59.274 }' 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.274 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.843 [2024-11-20 13:25:41.252623] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.843 [2024-11-20 13:25:41.352152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.843 "name": "raid_bdev1", 00:11:59.843 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:11:59.843 "strip_size_kb": 0, 00:11:59.843 "state": "online", 00:11:59.843 "raid_level": "raid1", 00:11:59.843 "superblock": true, 00:11:59.843 "num_base_bdevs": 2, 00:11:59.843 "num_base_bdevs_discovered": 1, 00:11:59.843 "num_base_bdevs_operational": 1, 00:11:59.843 "base_bdevs_list": [ 00:11:59.843 { 00:11:59.843 "name": null, 00:11:59.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.843 "is_configured": false, 00:11:59.843 "data_offset": 0, 00:11:59.843 "data_size": 63488 00:11:59.843 }, 00:11:59.843 { 00:11:59.843 "name": "BaseBdev2", 00:11:59.843 "uuid": 
"0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:11:59.843 "is_configured": true, 00:11:59.843 "data_offset": 2048, 00:11:59.843 "data_size": 63488 00:11:59.843 } 00:11:59.843 ] 00:11:59.843 }' 00:11:59.843 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.844 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.844 [2024-11-20 13:25:41.454119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:59.844 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:59.844 Zero copy mechanism will not be used. 00:11:59.844 Running I/O for 60 seconds... 00:12:00.416 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:00.416 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.416 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.416 [2024-11-20 13:25:41.807165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:00.416 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.416 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:00.416 [2024-11-20 13:25:41.858652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:12:00.416 [2024-11-20 13:25:41.860969] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:00.416 [2024-11-20 13:25:41.982477] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:00.416 [2024-11-20 13:25:41.983042] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:00.675 [2024-11-20 13:25:42.198985] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:00.675 [2024-11-20 13:25:42.199331] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:01.193 163.00 IOPS, 489.00 MiB/s [2024-11-20T13:25:42.861Z] [2024-11-20 13:25:42.708681] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:01.193 [2024-11-20 13:25:42.709122] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:01.453 13:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.453 13:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.453 13:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.453 13:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.453 13:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.453 13:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.453 13:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.453 13:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.453 13:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.453 13:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.453 13:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.453 "name": "raid_bdev1", 00:12:01.453 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:01.453 
"strip_size_kb": 0, 00:12:01.453 "state": "online", 00:12:01.453 "raid_level": "raid1", 00:12:01.453 "superblock": true, 00:12:01.453 "num_base_bdevs": 2, 00:12:01.453 "num_base_bdevs_discovered": 2, 00:12:01.453 "num_base_bdevs_operational": 2, 00:12:01.453 "process": { 00:12:01.453 "type": "rebuild", 00:12:01.453 "target": "spare", 00:12:01.453 "progress": { 00:12:01.453 "blocks": 12288, 00:12:01.453 "percent": 19 00:12:01.453 } 00:12:01.453 }, 00:12:01.453 "base_bdevs_list": [ 00:12:01.453 { 00:12:01.453 "name": "spare", 00:12:01.453 "uuid": "2bf1fe83-3bbe-5cf4-87d7-8df93e9a42b3", 00:12:01.453 "is_configured": true, 00:12:01.453 "data_offset": 2048, 00:12:01.453 "data_size": 63488 00:12:01.453 }, 00:12:01.453 { 00:12:01.453 "name": "BaseBdev2", 00:12:01.453 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:01.453 "is_configured": true, 00:12:01.453 "data_offset": 2048, 00:12:01.453 "data_size": 63488 00:12:01.453 } 00:12:01.453 ] 00:12:01.453 }' 00:12:01.453 13:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.453 [2024-11-20 13:25:42.941500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:01.453 13:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:01.453 13:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.453 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.453 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:01.453 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.453 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.453 [2024-11-20 13:25:43.021853] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.453 [2024-11-20 13:25:43.070412] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:01.453 [2024-11-20 13:25:43.070810] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:01.453 [2024-11-20 13:25:43.077969] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:01.453 [2024-11-20 13:25:43.085912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.453 [2024-11-20 13:25:43.085988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.453 [2024-11-20 13:25:43.086030] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:01.453 [2024-11-20 13:25:43.104285] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:12:01.453 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.453 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:01.453 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.453 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.453 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.453 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.453 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:01.453 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.453 13:25:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.453 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.453 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.712 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.712 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.712 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.712 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.712 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.712 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.712 "name": "raid_bdev1", 00:12:01.712 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:01.712 "strip_size_kb": 0, 00:12:01.712 "state": "online", 00:12:01.712 "raid_level": "raid1", 00:12:01.712 "superblock": true, 00:12:01.712 "num_base_bdevs": 2, 00:12:01.712 "num_base_bdevs_discovered": 1, 00:12:01.712 "num_base_bdevs_operational": 1, 00:12:01.712 "base_bdevs_list": [ 00:12:01.712 { 00:12:01.712 "name": null, 00:12:01.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.712 "is_configured": false, 00:12:01.712 "data_offset": 0, 00:12:01.712 "data_size": 63488 00:12:01.712 }, 00:12:01.712 { 00:12:01.712 "name": "BaseBdev2", 00:12:01.712 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:01.712 "is_configured": true, 00:12:01.712 "data_offset": 2048, 00:12:01.712 "data_size": 63488 00:12:01.712 } 00:12:01.712 ] 00:12:01.712 }' 00:12:01.712 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.712 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:01.971 160.00 IOPS, 480.00 MiB/s [2024-11-20T13:25:43.639Z] 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:01.971 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.972 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:01.972 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:01.972 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.972 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.972 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.972 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.972 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.972 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.230 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.230 "name": "raid_bdev1", 00:12:02.230 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:02.230 "strip_size_kb": 0, 00:12:02.230 "state": "online", 00:12:02.230 "raid_level": "raid1", 00:12:02.230 "superblock": true, 00:12:02.231 "num_base_bdevs": 2, 00:12:02.231 "num_base_bdevs_discovered": 1, 00:12:02.231 "num_base_bdevs_operational": 1, 00:12:02.231 "base_bdevs_list": [ 00:12:02.231 { 00:12:02.231 "name": null, 00:12:02.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.231 "is_configured": false, 00:12:02.231 "data_offset": 0, 00:12:02.231 "data_size": 63488 00:12:02.231 }, 00:12:02.231 { 00:12:02.231 "name": "BaseBdev2", 
00:12:02.231 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:02.231 "is_configured": true, 00:12:02.231 "data_offset": 2048, 00:12:02.231 "data_size": 63488 00:12:02.231 } 00:12:02.231 ] 00:12:02.231 }' 00:12:02.231 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.231 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:02.231 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.231 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:02.231 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:02.231 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.231 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.231 [2024-11-20 13:25:43.754990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:02.231 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.231 13:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:02.231 [2024-11-20 13:25:43.815806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:02.231 [2024-11-20 13:25:43.818178] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:02.489 [2024-11-20 13:25:43.933913] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:02.489 [2024-11-20 13:25:43.934630] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:02.489 [2024-11-20 13:25:44.144931] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:02.489 [2024-11-20 13:25:44.145400] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:03.057 157.67 IOPS, 473.00 MiB/s [2024-11-20T13:25:44.725Z] [2024-11-20 13:25:44.487363] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:03.057 [2024-11-20 13:25:44.716123] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:03.057 [2024-11-20 13:25:44.716540] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:03.316 "name": "raid_bdev1", 00:12:03.316 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:03.316 "strip_size_kb": 0, 00:12:03.316 "state": "online", 00:12:03.316 "raid_level": "raid1", 00:12:03.316 "superblock": true, 00:12:03.316 "num_base_bdevs": 2, 00:12:03.316 "num_base_bdevs_discovered": 2, 00:12:03.316 "num_base_bdevs_operational": 2, 00:12:03.316 "process": { 00:12:03.316 "type": "rebuild", 00:12:03.316 "target": "spare", 00:12:03.316 "progress": { 00:12:03.316 "blocks": 10240, 00:12:03.316 "percent": 16 00:12:03.316 } 00:12:03.316 }, 00:12:03.316 "base_bdevs_list": [ 00:12:03.316 { 00:12:03.316 "name": "spare", 00:12:03.316 "uuid": "2bf1fe83-3bbe-5cf4-87d7-8df93e9a42b3", 00:12:03.316 "is_configured": true, 00:12:03.316 "data_offset": 2048, 00:12:03.316 "data_size": 63488 00:12:03.316 }, 00:12:03.316 { 00:12:03.316 "name": "BaseBdev2", 00:12:03.316 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:03.316 "is_configured": true, 00:12:03.316 "data_offset": 2048, 00:12:03.316 "data_size": 63488 00:12:03.316 } 00:12:03.316 ] 00:12:03.316 }' 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:03.316 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 
00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=333 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.316 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.574 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:03.574 "name": "raid_bdev1", 00:12:03.574 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:03.574 "strip_size_kb": 0, 00:12:03.574 "state": "online", 00:12:03.574 "raid_level": "raid1", 00:12:03.574 "superblock": true, 00:12:03.574 "num_base_bdevs": 2, 00:12:03.574 "num_base_bdevs_discovered": 2, 00:12:03.574 
"num_base_bdevs_operational": 2, 00:12:03.574 "process": { 00:12:03.574 "type": "rebuild", 00:12:03.574 "target": "spare", 00:12:03.574 "progress": { 00:12:03.574 "blocks": 12288, 00:12:03.574 "percent": 19 00:12:03.574 } 00:12:03.574 }, 00:12:03.574 "base_bdevs_list": [ 00:12:03.574 { 00:12:03.574 "name": "spare", 00:12:03.574 "uuid": "2bf1fe83-3bbe-5cf4-87d7-8df93e9a42b3", 00:12:03.574 "is_configured": true, 00:12:03.574 "data_offset": 2048, 00:12:03.574 "data_size": 63488 00:12:03.574 }, 00:12:03.574 { 00:12:03.574 "name": "BaseBdev2", 00:12:03.574 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:03.574 "is_configured": true, 00:12:03.574 "data_offset": 2048, 00:12:03.574 "data_size": 63488 00:12:03.574 } 00:12:03.574 ] 00:12:03.574 }' 00:12:03.574 13:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.574 13:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:03.574 13:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.574 [2024-11-20 13:25:45.044892] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:03.574 13:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:03.574 13:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:04.093 129.75 IOPS, 389.25 MiB/s [2024-11-20T13:25:45.761Z] [2024-11-20 13:25:45.726145] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:04.351 [2024-11-20 13:25:45.834144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:04.611 13:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:04.611 13:25:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:04.611 13:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.611 13:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:04.611 13:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:04.611 13:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.611 13:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.611 13:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.611 13:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.611 13:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.611 13:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.611 13:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.611 "name": "raid_bdev1", 00:12:04.611 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:04.611 "strip_size_kb": 0, 00:12:04.611 "state": "online", 00:12:04.611 "raid_level": "raid1", 00:12:04.611 "superblock": true, 00:12:04.611 "num_base_bdevs": 2, 00:12:04.611 "num_base_bdevs_discovered": 2, 00:12:04.611 "num_base_bdevs_operational": 2, 00:12:04.611 "process": { 00:12:04.611 "type": "rebuild", 00:12:04.611 "target": "spare", 00:12:04.611 "progress": { 00:12:04.611 "blocks": 32768, 00:12:04.611 "percent": 51 00:12:04.611 } 00:12:04.611 }, 00:12:04.611 "base_bdevs_list": [ 00:12:04.611 { 00:12:04.611 "name": "spare", 00:12:04.611 "uuid": "2bf1fe83-3bbe-5cf4-87d7-8df93e9a42b3", 00:12:04.611 "is_configured": true, 00:12:04.611 "data_offset": 2048, 
00:12:04.611 "data_size": 63488 00:12:04.611 }, 00:12:04.611 { 00:12:04.611 "name": "BaseBdev2", 00:12:04.611 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:04.611 "is_configured": true, 00:12:04.611 "data_offset": 2048, 00:12:04.611 "data_size": 63488 00:12:04.611 } 00:12:04.611 ] 00:12:04.611 }' 00:12:04.611 13:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.611 [2024-11-20 13:25:46.184942] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:04.611 13:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:04.611 13:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.611 13:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:04.611 13:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:04.870 [2024-11-20 13:25:46.403783] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:04.870 112.20 IOPS, 336.60 MiB/s [2024-11-20T13:25:46.538Z] [2024-11-20 13:25:46.524735] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:05.438 [2024-11-20 13:25:46.849572] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:05.697 [2024-11-20 13:25:47.182588] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:05.697 [2024-11-20 13:25:47.183254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:05.697 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:12:05.697 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:05.697 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.697 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:05.697 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:05.697 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.697 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.697 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.697 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.697 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.697 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.697 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.697 "name": "raid_bdev1", 00:12:05.697 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:05.697 "strip_size_kb": 0, 00:12:05.697 "state": "online", 00:12:05.697 "raid_level": "raid1", 00:12:05.697 "superblock": true, 00:12:05.697 "num_base_bdevs": 2, 00:12:05.697 "num_base_bdevs_discovered": 2, 00:12:05.697 "num_base_bdevs_operational": 2, 00:12:05.697 "process": { 00:12:05.697 "type": "rebuild", 00:12:05.697 "target": "spare", 00:12:05.697 "progress": { 00:12:05.697 "blocks": 51200, 00:12:05.697 "percent": 80 00:12:05.697 } 00:12:05.697 }, 00:12:05.697 "base_bdevs_list": [ 00:12:05.697 { 00:12:05.697 "name": "spare", 00:12:05.697 "uuid": "2bf1fe83-3bbe-5cf4-87d7-8df93e9a42b3", 00:12:05.697 "is_configured": 
true, 00:12:05.697 "data_offset": 2048, 00:12:05.697 "data_size": 63488 00:12:05.697 }, 00:12:05.697 { 00:12:05.697 "name": "BaseBdev2", 00:12:05.698 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:05.698 "is_configured": true, 00:12:05.698 "data_offset": 2048, 00:12:05.698 "data_size": 63488 00:12:05.698 } 00:12:05.698 ] 00:12:05.698 }' 00:12:05.698 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.698 [2024-11-20 13:25:47.300784] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:05.698 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:05.698 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.957 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:05.957 13:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:06.216 100.17 IOPS, 300.50 MiB/s [2024-11-20T13:25:47.884Z] [2024-11-20 13:25:47.846969] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:06.475 [2024-11-20 13:25:47.953717] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:06.475 [2024-11-20 13:25:47.956615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.735 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:06.735 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:06.735 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.735 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:06.735 
13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:06.735 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.735 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.735 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.735 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.735 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.995 "name": "raid_bdev1", 00:12:06.995 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:06.995 "strip_size_kb": 0, 00:12:06.995 "state": "online", 00:12:06.995 "raid_level": "raid1", 00:12:06.995 "superblock": true, 00:12:06.995 "num_base_bdevs": 2, 00:12:06.995 "num_base_bdevs_discovered": 2, 00:12:06.995 "num_base_bdevs_operational": 2, 00:12:06.995 "base_bdevs_list": [ 00:12:06.995 { 00:12:06.995 "name": "spare", 00:12:06.995 "uuid": "2bf1fe83-3bbe-5cf4-87d7-8df93e9a42b3", 00:12:06.995 "is_configured": true, 00:12:06.995 "data_offset": 2048, 00:12:06.995 "data_size": 63488 00:12:06.995 }, 00:12:06.995 { 00:12:06.995 "name": "BaseBdev2", 00:12:06.995 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:06.995 "is_configured": true, 00:12:06.995 "data_offset": 2048, 00:12:06.995 "data_size": 63488 00:12:06.995 } 00:12:06.995 ] 00:12:06.995 }' 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.995 90.71 IOPS, 272.14 MiB/s [2024-11-20T13:25:48.663Z] 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
none == \r\e\b\u\i\l\d ]] 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.995 "name": "raid_bdev1", 00:12:06.995 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:06.995 "strip_size_kb": 0, 00:12:06.995 "state": "online", 00:12:06.995 "raid_level": "raid1", 00:12:06.995 "superblock": true, 00:12:06.995 "num_base_bdevs": 2, 00:12:06.995 "num_base_bdevs_discovered": 2, 00:12:06.995 "num_base_bdevs_operational": 2, 00:12:06.995 "base_bdevs_list": [ 00:12:06.995 { 
00:12:06.995 "name": "spare", 00:12:06.995 "uuid": "2bf1fe83-3bbe-5cf4-87d7-8df93e9a42b3", 00:12:06.995 "is_configured": true, 00:12:06.995 "data_offset": 2048, 00:12:06.995 "data_size": 63488 00:12:06.995 }, 00:12:06.995 { 00:12:06.995 "name": "BaseBdev2", 00:12:06.995 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:06.995 "is_configured": true, 00:12:06.995 "data_offset": 2048, 00:12:06.995 "data_size": 63488 00:12:06.995 } 00:12:06.995 ] 00:12:06.995 }' 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:06.995 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.254 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:07.254 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:07.254 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.254 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.254 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.254 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.254 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.254 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.254 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.254 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.254 13:25:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.254 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.254 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.254 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.254 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.254 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.254 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.254 "name": "raid_bdev1", 00:12:07.254 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:07.254 "strip_size_kb": 0, 00:12:07.254 "state": "online", 00:12:07.255 "raid_level": "raid1", 00:12:07.255 "superblock": true, 00:12:07.255 "num_base_bdevs": 2, 00:12:07.255 "num_base_bdevs_discovered": 2, 00:12:07.255 "num_base_bdevs_operational": 2, 00:12:07.255 "base_bdevs_list": [ 00:12:07.255 { 00:12:07.255 "name": "spare", 00:12:07.255 "uuid": "2bf1fe83-3bbe-5cf4-87d7-8df93e9a42b3", 00:12:07.255 "is_configured": true, 00:12:07.255 "data_offset": 2048, 00:12:07.255 "data_size": 63488 00:12:07.255 }, 00:12:07.255 { 00:12:07.255 "name": "BaseBdev2", 00:12:07.255 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:07.255 "is_configured": true, 00:12:07.255 "data_offset": 2048, 00:12:07.255 "data_size": 63488 00:12:07.255 } 00:12:07.255 ] 00:12:07.255 }' 00:12:07.255 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.255 13:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.514 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:07.514 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.514 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.514 [2024-11-20 13:25:49.090504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.514 [2024-11-20 13:25:49.090612] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.514 00:12:07.514 Latency(us) 00:12:07.514 [2024-11-20T13:25:49.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.514 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:07.514 raid_bdev1 : 7.72 85.27 255.82 0.00 0.00 15772.92 325.53 114931.26 00:12:07.514 [2024-11-20T13:25:49.182Z] =================================================================================================================== 00:12:07.514 [2024-11-20T13:25:49.183Z] Total : 85.27 255.82 0.00 0.00 15772.92 325.53 114931.26 00:12:07.515 { 00:12:07.515 "results": [ 00:12:07.515 { 00:12:07.515 "job": "raid_bdev1", 00:12:07.515 "core_mask": "0x1", 00:12:07.515 "workload": "randrw", 00:12:07.515 "percentage": 50, 00:12:07.515 "status": "finished", 00:12:07.515 "queue_depth": 2, 00:12:07.515 "io_size": 3145728, 00:12:07.515 "runtime": 7.716505, 00:12:07.515 "iops": 85.27176487282779, 00:12:07.515 "mibps": 255.81529461848336, 00:12:07.515 "io_failed": 0, 00:12:07.515 "io_timeout": 0, 00:12:07.515 "avg_latency_us": 15772.92313083182, 00:12:07.515 "min_latency_us": 325.5336244541485, 00:12:07.515 "max_latency_us": 114931.2558951965 00:12:07.515 } 00:12:07.515 ], 00:12:07.515 "core_count": 1 00:12:07.515 } 00:12:07.515 [2024-11-20 13:25:49.162684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.515 [2024-11-20 13:25:49.162743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.515 [2024-11-20 13:25:49.162837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.515 [2024-11-20 13:25:49.162857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:07.515 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.515 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.515 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:07.515 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.515 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.515 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.773 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:07.773 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:07.773 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:07.773 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:07.773 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:07.773 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:07.773 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:07.773 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:07.773 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:07.773 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:07.773 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:07.773 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:07.773 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:08.032 /dev/nbd0 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.032 1+0 records in 00:12:08.032 1+0 records out 00:12:08.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044953 s, 9.1 MB/s 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 
00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:08.032 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:08.291 /dev/nbd1 00:12:08.291 13:25:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.291 1+0 records in 00:12:08.291 1+0 records out 00:12:08.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525261 s, 7.8 MB/s 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 
00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:08.291 13:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:08.550 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:08.550 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:08.550 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:08.550 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:08.550 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:08.550 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:08.550 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:08.550 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:08.550 
13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:08.550 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:08.550 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:08.550 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:08.550 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:08.550 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:08.550 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:08.810 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:08.810 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:08.810 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:08.810 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:08.810 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:08.810 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:08.810 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:08.810 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:08.810 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:08.810 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:08.810 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:08.810 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.810 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.810 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:08.810 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.810 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.068 [2024-11-20 13:25:50.478763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:09.069 [2024-11-20 13:25:50.478858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.069 [2024-11-20 13:25:50.478883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:09.069 [2024-11-20 13:25:50.478897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.069 [2024-11-20 13:25:50.481424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.069 [2024-11-20 13:25:50.481476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:09.069 [2024-11-20 13:25:50.481579] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:09.069 [2024-11-20 13:25:50.481624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:09.069 [2024-11-20 13:25:50.481742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.069 spare 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.069 [2024-11-20 13:25:50.581670] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:09.069 [2024-11-20 13:25:50.581721] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:09.069 [2024-11-20 13:25:50.582145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027720 00:12:09.069 [2024-11-20 13:25:50.582365] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:09.069 [2024-11-20 13:25:50.582383] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:09.069 [2024-11-20 13:25:50.582617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.069 "name": "raid_bdev1", 00:12:09.069 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:09.069 "strip_size_kb": 0, 00:12:09.069 "state": "online", 00:12:09.069 "raid_level": "raid1", 00:12:09.069 "superblock": true, 00:12:09.069 "num_base_bdevs": 2, 00:12:09.069 "num_base_bdevs_discovered": 2, 00:12:09.069 "num_base_bdevs_operational": 2, 00:12:09.069 "base_bdevs_list": [ 00:12:09.069 { 00:12:09.069 "name": "spare", 00:12:09.069 "uuid": "2bf1fe83-3bbe-5cf4-87d7-8df93e9a42b3", 00:12:09.069 "is_configured": true, 00:12:09.069 "data_offset": 2048, 00:12:09.069 "data_size": 63488 00:12:09.069 }, 00:12:09.069 { 00:12:09.069 "name": "BaseBdev2", 00:12:09.069 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:09.069 "is_configured": true, 00:12:09.069 "data_offset": 2048, 00:12:09.069 "data_size": 63488 00:12:09.069 } 00:12:09.069 ] 00:12:09.069 }' 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.069 13:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none 
none 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.636 "name": "raid_bdev1", 00:12:09.636 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:09.636 "strip_size_kb": 0, 00:12:09.636 "state": "online", 00:12:09.636 "raid_level": "raid1", 00:12:09.636 "superblock": true, 00:12:09.636 "num_base_bdevs": 2, 00:12:09.636 "num_base_bdevs_discovered": 2, 00:12:09.636 "num_base_bdevs_operational": 2, 00:12:09.636 "base_bdevs_list": [ 00:12:09.636 { 00:12:09.636 "name": "spare", 00:12:09.636 "uuid": "2bf1fe83-3bbe-5cf4-87d7-8df93e9a42b3", 00:12:09.636 "is_configured": true, 00:12:09.636 "data_offset": 2048, 00:12:09.636 "data_size": 63488 00:12:09.636 }, 00:12:09.636 { 00:12:09.636 "name": "BaseBdev2", 00:12:09.636 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:09.636 "is_configured": true, 00:12:09.636 "data_offset": 2048, 00:12:09.636 "data_size": 63488 00:12:09.636 } 00:12:09.636 ] 00:12:09.636 }' 00:12:09.636 
13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.636 [2024-11-20 13:25:51.233801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.636 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.637 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:09.637 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.637 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.637 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.637 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.637 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.637 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.637 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.637 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.637 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.637 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.637 "name": "raid_bdev1", 00:12:09.637 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:09.637 "strip_size_kb": 0, 00:12:09.637 "state": "online", 00:12:09.637 "raid_level": "raid1", 00:12:09.637 "superblock": true, 00:12:09.637 "num_base_bdevs": 2, 00:12:09.637 "num_base_bdevs_discovered": 1, 00:12:09.637 "num_base_bdevs_operational": 1, 00:12:09.637 "base_bdevs_list": [ 00:12:09.637 { 00:12:09.637 "name": null, 00:12:09.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.637 "is_configured": false, 
00:12:09.637 "data_offset": 0, 00:12:09.637 "data_size": 63488 00:12:09.637 }, 00:12:09.637 { 00:12:09.637 "name": "BaseBdev2", 00:12:09.637 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:09.637 "is_configured": true, 00:12:09.637 "data_offset": 2048, 00:12:09.637 "data_size": 63488 00:12:09.637 } 00:12:09.637 ] 00:12:09.637 }' 00:12:09.637 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.637 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.206 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:10.206 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.206 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.206 [2024-11-20 13:25:51.705117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:10.206 [2024-11-20 13:25:51.705360] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:10.206 [2024-11-20 13:25:51.705377] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:10.206 [2024-11-20 13:25:51.705428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:10.206 [2024-11-20 13:25:51.710936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000277f0 00:12:10.206 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.206 13:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:10.206 [2024-11-20 13:25:51.713265] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:11.142 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:11.142 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.142 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:11.142 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:11.142 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.142 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.142 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.143 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.143 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.143 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.143 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.143 "name": "raid_bdev1", 00:12:11.143 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:11.143 "strip_size_kb": 0, 00:12:11.143 "state": "online", 
00:12:11.143 "raid_level": "raid1", 00:12:11.143 "superblock": true, 00:12:11.143 "num_base_bdevs": 2, 00:12:11.143 "num_base_bdevs_discovered": 2, 00:12:11.143 "num_base_bdevs_operational": 2, 00:12:11.143 "process": { 00:12:11.143 "type": "rebuild", 00:12:11.143 "target": "spare", 00:12:11.143 "progress": { 00:12:11.143 "blocks": 20480, 00:12:11.143 "percent": 32 00:12:11.143 } 00:12:11.143 }, 00:12:11.143 "base_bdevs_list": [ 00:12:11.143 { 00:12:11.143 "name": "spare", 00:12:11.143 "uuid": "2bf1fe83-3bbe-5cf4-87d7-8df93e9a42b3", 00:12:11.143 "is_configured": true, 00:12:11.143 "data_offset": 2048, 00:12:11.143 "data_size": 63488 00:12:11.143 }, 00:12:11.143 { 00:12:11.143 "name": "BaseBdev2", 00:12:11.143 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:11.143 "is_configured": true, 00:12:11.143 "data_offset": 2048, 00:12:11.143 "data_size": 63488 00:12:11.143 } 00:12:11.143 ] 00:12:11.143 }' 00:12:11.143 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.402 [2024-11-20 13:25:52.861395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:11.402 [2024-11-20 13:25:52.918968] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:11.402 [2024-11-20 
13:25:52.919193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.402 [2024-11-20 13:25:52.919252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:11.402 [2024-11-20 13:25:52.919291] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.402 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:12:11.403 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.403 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.403 "name": "raid_bdev1", 00:12:11.403 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:11.403 "strip_size_kb": 0, 00:12:11.403 "state": "online", 00:12:11.403 "raid_level": "raid1", 00:12:11.403 "superblock": true, 00:12:11.403 "num_base_bdevs": 2, 00:12:11.403 "num_base_bdevs_discovered": 1, 00:12:11.403 "num_base_bdevs_operational": 1, 00:12:11.403 "base_bdevs_list": [ 00:12:11.403 { 00:12:11.403 "name": null, 00:12:11.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.403 "is_configured": false, 00:12:11.403 "data_offset": 0, 00:12:11.403 "data_size": 63488 00:12:11.403 }, 00:12:11.403 { 00:12:11.403 "name": "BaseBdev2", 00:12:11.403 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:11.403 "is_configured": true, 00:12:11.403 "data_offset": 2048, 00:12:11.403 "data_size": 63488 00:12:11.403 } 00:12:11.403 ] 00:12:11.403 }' 00:12:11.403 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.403 13:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.971 13:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:11.971 13:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.971 13:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.971 [2024-11-20 13:25:53.439941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:11.971 [2024-11-20 13:25:53.440132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.971 [2024-11-20 13:25:53.440185] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000a280 00:12:11.971 [2024-11-20 13:25:53.440257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.971 [2024-11-20 13:25:53.440822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.971 [2024-11-20 13:25:53.440900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:11.971 [2024-11-20 13:25:53.441073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:11.971 [2024-11-20 13:25:53.441124] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:11.971 [2024-11-20 13:25:53.441183] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:11.971 [2024-11-20 13:25:53.441264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:11.971 [2024-11-20 13:25:53.446956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:12:11.971 spare 00:12:11.971 13:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.971 13:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:11.971 [2024-11-20 13:25:53.449345] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:12.909 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.909 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.909 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.909 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.909 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.909 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.909 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.909 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.909 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.909 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.909 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.909 "name": "raid_bdev1", 00:12:12.909 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:12.909 "strip_size_kb": 0, 00:12:12.909 "state": "online", 00:12:12.909 "raid_level": "raid1", 00:12:12.909 "superblock": true, 00:12:12.909 "num_base_bdevs": 2, 00:12:12.909 "num_base_bdevs_discovered": 2, 00:12:12.909 "num_base_bdevs_operational": 2, 00:12:12.909 "process": { 00:12:12.909 "type": "rebuild", 00:12:12.909 "target": "spare", 00:12:12.909 "progress": { 00:12:12.909 "blocks": 20480, 00:12:12.909 "percent": 32 00:12:12.909 } 00:12:12.909 }, 00:12:12.909 "base_bdevs_list": [ 00:12:12.909 { 00:12:12.909 "name": "spare", 00:12:12.909 "uuid": "2bf1fe83-3bbe-5cf4-87d7-8df93e9a42b3", 00:12:12.909 "is_configured": true, 00:12:12.909 "data_offset": 2048, 00:12:12.909 "data_size": 63488 00:12:12.909 }, 00:12:12.909 { 00:12:12.909 "name": "BaseBdev2", 00:12:12.909 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:12.909 "is_configured": true, 00:12:12.909 "data_offset": 2048, 00:12:12.909 "data_size": 63488 00:12:12.909 } 00:12:12.909 ] 00:12:12.909 }' 00:12:12.909 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.909 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:12.909 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.167 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.167 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:13.167 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.167 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.167 [2024-11-20 13:25:54.597876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:13.167 [2024-11-20 13:25:54.654843] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:13.167 [2024-11-20 13:25:54.655040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.167 [2024-11-20 13:25:54.655058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:13.167 [2024-11-20 13:25:54.655069] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:13.167 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.167 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:13.167 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.167 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.167 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.167 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.167 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:13.167 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.167 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.168 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.168 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.168 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.168 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.168 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.168 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.168 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.168 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.168 "name": "raid_bdev1", 00:12:13.168 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:13.168 "strip_size_kb": 0, 00:12:13.168 "state": "online", 00:12:13.168 "raid_level": "raid1", 00:12:13.168 "superblock": true, 00:12:13.168 "num_base_bdevs": 2, 00:12:13.168 "num_base_bdevs_discovered": 1, 00:12:13.168 "num_base_bdevs_operational": 1, 00:12:13.168 "base_bdevs_list": [ 00:12:13.168 { 00:12:13.168 "name": null, 00:12:13.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.168 "is_configured": false, 00:12:13.168 "data_offset": 0, 00:12:13.168 "data_size": 63488 00:12:13.168 }, 00:12:13.168 { 00:12:13.168 "name": "BaseBdev2", 00:12:13.168 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:13.168 "is_configured": true, 00:12:13.168 "data_offset": 2048, 00:12:13.168 "data_size": 63488 00:12:13.168 } 00:12:13.168 ] 00:12:13.168 }' 
00:12:13.168 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.168 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.734 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:13.734 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.735 "name": "raid_bdev1", 00:12:13.735 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:13.735 "strip_size_kb": 0, 00:12:13.735 "state": "online", 00:12:13.735 "raid_level": "raid1", 00:12:13.735 "superblock": true, 00:12:13.735 "num_base_bdevs": 2, 00:12:13.735 "num_base_bdevs_discovered": 1, 00:12:13.735 "num_base_bdevs_operational": 1, 00:12:13.735 "base_bdevs_list": [ 00:12:13.735 { 00:12:13.735 "name": null, 00:12:13.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.735 "is_configured": false, 00:12:13.735 "data_offset": 0, 
00:12:13.735 "data_size": 63488 00:12:13.735 }, 00:12:13.735 { 00:12:13.735 "name": "BaseBdev2", 00:12:13.735 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:13.735 "is_configured": true, 00:12:13.735 "data_offset": 2048, 00:12:13.735 "data_size": 63488 00:12:13.735 } 00:12:13.735 ] 00:12:13.735 }' 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.735 [2024-11-20 13:25:55.247752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:13.735 [2024-11-20 13:25:55.247943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.735 [2024-11-20 13:25:55.248015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:13.735 [2024-11-20 13:25:55.248062] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.735 [2024-11-20 13:25:55.248562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.735 [2024-11-20 13:25:55.248592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:13.735 [2024-11-20 13:25:55.248680] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:13.735 [2024-11-20 13:25:55.248714] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:13.735 [2024-11-20 13:25:55.248723] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:13.735 [2024-11-20 13:25:55.248738] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:13.735 BaseBdev1 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.735 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.671 "name": "raid_bdev1", 00:12:14.671 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:14.671 "strip_size_kb": 0, 00:12:14.671 "state": "online", 00:12:14.671 "raid_level": "raid1", 00:12:14.671 "superblock": true, 00:12:14.671 "num_base_bdevs": 2, 00:12:14.671 "num_base_bdevs_discovered": 1, 00:12:14.671 "num_base_bdevs_operational": 1, 00:12:14.671 "base_bdevs_list": [ 00:12:14.671 { 00:12:14.671 "name": null, 00:12:14.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.671 "is_configured": false, 00:12:14.671 "data_offset": 0, 00:12:14.671 "data_size": 63488 00:12:14.671 }, 00:12:14.671 { 00:12:14.671 "name": "BaseBdev2", 00:12:14.671 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:14.671 "is_configured": true, 00:12:14.671 "data_offset": 2048, 00:12:14.671 "data_size": 63488 00:12:14.671 } 00:12:14.671 ] 00:12:14.671 }' 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.671 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.238 "name": "raid_bdev1", 00:12:15.238 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:15.238 "strip_size_kb": 0, 00:12:15.238 "state": "online", 00:12:15.238 "raid_level": "raid1", 00:12:15.238 "superblock": true, 00:12:15.238 "num_base_bdevs": 2, 00:12:15.238 "num_base_bdevs_discovered": 1, 00:12:15.238 "num_base_bdevs_operational": 1, 00:12:15.238 "base_bdevs_list": [ 00:12:15.238 { 00:12:15.238 "name": null, 00:12:15.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.238 "is_configured": false, 00:12:15.238 "data_offset": 0, 00:12:15.238 "data_size": 63488 00:12:15.238 }, 00:12:15.238 { 00:12:15.238 "name": "BaseBdev2", 00:12:15.238 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:15.238 "is_configured": true, 
00:12:15.238 "data_offset": 2048, 00:12:15.238 "data_size": 63488 00:12:15.238 } 00:12:15.238 ] 00:12:15.238 }' 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.238 [2024-11-20 13:25:56.869402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.238 [2024-11-20 13:25:56.869593] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:15.238 [2024-11-20 13:25:56.869606] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:15.238 request: 00:12:15.238 { 00:12:15.238 "base_bdev": "BaseBdev1", 00:12:15.238 "raid_bdev": "raid_bdev1", 00:12:15.238 "method": "bdev_raid_add_base_bdev", 00:12:15.238 "req_id": 1 00:12:15.238 } 00:12:15.238 Got JSON-RPC error response 00:12:15.238 response: 00:12:15.238 { 00:12:15.238 "code": -22, 00:12:15.238 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:15.238 } 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:15.238 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.626 "name": "raid_bdev1", 00:12:16.626 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:16.626 "strip_size_kb": 0, 00:12:16.626 "state": "online", 00:12:16.626 "raid_level": "raid1", 00:12:16.626 "superblock": true, 00:12:16.626 "num_base_bdevs": 2, 00:12:16.626 "num_base_bdevs_discovered": 1, 00:12:16.626 "num_base_bdevs_operational": 1, 00:12:16.626 "base_bdevs_list": [ 00:12:16.626 { 00:12:16.626 "name": null, 00:12:16.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.626 "is_configured": false, 00:12:16.626 "data_offset": 0, 00:12:16.626 "data_size": 63488 00:12:16.626 }, 00:12:16.626 { 00:12:16.626 "name": "BaseBdev2", 00:12:16.626 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:16.626 "is_configured": true, 00:12:16.626 "data_offset": 2048, 00:12:16.626 "data_size": 63488 00:12:16.626 } 00:12:16.626 ] 00:12:16.626 }' 
00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.626 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.885 "name": "raid_bdev1", 00:12:16.885 "uuid": "8042cdbf-1b63-4639-8501-674b87ff8d0e", 00:12:16.885 "strip_size_kb": 0, 00:12:16.885 "state": "online", 00:12:16.885 "raid_level": "raid1", 00:12:16.885 "superblock": true, 00:12:16.885 "num_base_bdevs": 2, 00:12:16.885 "num_base_bdevs_discovered": 1, 00:12:16.885 "num_base_bdevs_operational": 1, 00:12:16.885 "base_bdevs_list": [ 00:12:16.885 { 00:12:16.885 "name": null, 00:12:16.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.885 "is_configured": false, 00:12:16.885 "data_offset": 0, 
00:12:16.885 "data_size": 63488 00:12:16.885 }, 00:12:16.885 { 00:12:16.885 "name": "BaseBdev2", 00:12:16.885 "uuid": "0ff51d0c-0286-5d30-9f9a-a93971996b24", 00:12:16.885 "is_configured": true, 00:12:16.885 "data_offset": 2048, 00:12:16.885 "data_size": 63488 00:12:16.885 } 00:12:16.885 ] 00:12:16.885 }' 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87198 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 87198 ']' 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 87198 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.885 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87198 00:12:17.145 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.145 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.145 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87198' 00:12:17.145 killing process with pid 87198 00:12:17.145 Received shutdown signal, test time was about 17.158440 seconds 00:12:17.145 00:12:17.145 Latency(us) 00:12:17.145 [2024-11-20T13:25:58.813Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.145 [2024-11-20T13:25:58.813Z] =================================================================================================================== 00:12:17.145 [2024-11-20T13:25:58.813Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:17.145 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 87198 00:12:17.145 [2024-11-20 13:25:58.582049] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:17.145 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 87198 00:12:17.145 [2024-11-20 13:25:58.582213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.145 [2024-11-20 13:25:58.582285] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.145 [2024-11-20 13:25:58.582297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:12:17.145 [2024-11-20 13:25:58.610406] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:17.405 ************************************ 00:12:17.405 END TEST raid_rebuild_test_sb_io 00:12:17.405 ************************************ 00:12:17.405 00:12:17.405 real 0m19.201s 00:12:17.405 user 0m25.808s 00:12:17.405 sys 0m2.351s 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.405 13:25:58 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:17.405 13:25:58 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:17.405 13:25:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:17.405 
13:25:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.405 13:25:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:17.405 ************************************ 00:12:17.405 START TEST raid_rebuild_test 00:12:17.405 ************************************ 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=87880 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 87880 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 87880 ']' 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.405 13:25:58 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.405 13:25:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.405 [2024-11-20 13:25:59.007116] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:12:17.405 [2024-11-20 13:25:59.007344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87880 ] 00:12:17.405 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:17.405 Zero copy mechanism will not be used. 
00:12:17.670 [2024-11-20 13:25:59.164177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.670 [2024-11-20 13:25:59.194821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.670 [2024-11-20 13:25:59.239820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.670 [2024-11-20 13:25:59.239863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.607 13:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.607 13:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:18.607 13:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.607 13:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:18.607 13:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.607 13:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.607 BaseBdev1_malloc 00:12:18.607 13:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.607 13:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:18.607 13:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.607 13:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.607 [2024-11-20 13:25:59.986372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:18.607 [2024-11-20 13:25:59.986575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.607 [2024-11-20 13:25:59.986630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:18.607 [2024-11-20 13:25:59.986675] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.607 [2024-11-20 13:25:59.989322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.607 [2024-11-20 13:25:59.989440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:18.607 BaseBdev1 00:12:18.607 13:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.607 13:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.607 13:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:18.607 13:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.607 13:25:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.607 BaseBdev2_malloc 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.607 [2024-11-20 13:26:00.015964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:18.607 [2024-11-20 13:26:00.016158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.607 [2024-11-20 13:26:00.016208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:18.607 [2024-11-20 13:26:00.016250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.607 [2024-11-20 13:26:00.018829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.607 [2024-11-20 13:26:00.018933] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:18.607 BaseBdev2 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.607 BaseBdev3_malloc 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.607 [2024-11-20 13:26:00.045197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:18.607 [2024-11-20 13:26:00.045378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.607 [2024-11-20 13:26:00.045425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:18.607 [2024-11-20 13:26:00.045458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.607 [2024-11-20 13:26:00.047895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.607 [2024-11-20 13:26:00.048005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:18.607 BaseBdev3 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.607 
13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.607 BaseBdev4_malloc 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.607 [2024-11-20 13:26:00.086838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:18.607 [2024-11-20 13:26:00.087035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.607 [2024-11-20 13:26:00.087076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:18.607 [2024-11-20 13:26:00.087089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.607 [2024-11-20 13:26:00.089757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.607 BaseBdev4 00:12:18.607 [2024-11-20 13:26:00.089914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.607 spare_malloc 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.607 spare_delay 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.607 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.607 [2024-11-20 13:26:00.128524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:18.607 [2024-11-20 13:26:00.128698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.607 [2024-11-20 13:26:00.128746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:18.607 [2024-11-20 13:26:00.128787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.607 [2024-11-20 13:26:00.131295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.608 [2024-11-20 13:26:00.131392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:18.608 spare 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r 
raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.608 [2024-11-20 13:26:00.140631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.608 [2024-11-20 13:26:00.142822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.608 [2024-11-20 13:26:00.142912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.608 [2024-11-20 13:26:00.142970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:18.608 [2024-11-20 13:26:00.143101] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:18.608 [2024-11-20 13:26:00.143115] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:18.608 [2024-11-20 13:26:00.143458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:18.608 [2024-11-20 13:26:00.143650] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:18.608 [2024-11-20 13:26:00.143671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:18.608 [2024-11-20 13:26:00.143852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.608 "name": "raid_bdev1", 00:12:18.608 "uuid": "886fc26d-1cc1-4fd3-84f6-5a5adec5975a", 00:12:18.608 "strip_size_kb": 0, 00:12:18.608 "state": "online", 00:12:18.608 "raid_level": "raid1", 00:12:18.608 "superblock": false, 00:12:18.608 "num_base_bdevs": 4, 00:12:18.608 "num_base_bdevs_discovered": 4, 00:12:18.608 "num_base_bdevs_operational": 4, 00:12:18.608 "base_bdevs_list": [ 00:12:18.608 { 00:12:18.608 "name": "BaseBdev1", 00:12:18.608 "uuid": "875d9eca-da88-5701-9f9e-2a318db261b5", 00:12:18.608 "is_configured": true, 00:12:18.608 "data_offset": 0, 00:12:18.608 "data_size": 65536 00:12:18.608 }, 00:12:18.608 { 00:12:18.608 
"name": "BaseBdev2", 00:12:18.608 "uuid": "0c4634a1-37d0-53b5-a49b-fef737e17c77", 00:12:18.608 "is_configured": true, 00:12:18.608 "data_offset": 0, 00:12:18.608 "data_size": 65536 00:12:18.608 }, 00:12:18.608 { 00:12:18.608 "name": "BaseBdev3", 00:12:18.608 "uuid": "91033d36-298b-514e-9cbe-15c2738f4a11", 00:12:18.608 "is_configured": true, 00:12:18.608 "data_offset": 0, 00:12:18.608 "data_size": 65536 00:12:18.608 }, 00:12:18.608 { 00:12:18.608 "name": "BaseBdev4", 00:12:18.608 "uuid": "b172790f-b81b-53bd-80ca-bbb9872e032e", 00:12:18.608 "is_configured": true, 00:12:18.608 "data_offset": 0, 00:12:18.608 "data_size": 65536 00:12:18.608 } 00:12:18.608 ] 00:12:18.608 }' 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.608 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:19.176 [2024-11-20 13:26:00.640172] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:19.176 13:26:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:19.177 13:26:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:19.177 13:26:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:19.177 13:26:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.177 13:26:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:19.436 [2024-11-20 13:26:00.975371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:12:19.436 /dev/nbd0 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:19.436 
13:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.436 1+0 records in 00:12:19.436 1+0 records out 00:12:19.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490145 s, 8.4 MB/s 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 
00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:19.436 13:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:27.603 65536+0 records in 00:12:27.603 65536+0 records out 00:12:27.603 33554432 bytes (34 MB, 32 MiB) copied, 6.81392 s, 4.9 MB/s 00:12:27.603 13:26:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:27.603 13:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:27.603 13:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:27.603 13:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:27.603 13:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:27.603 13:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.603 13:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:27.603 13:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:27.603 [2024-11-20 13:26:08.146328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.603 13:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:27.603 13:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:27.603 13:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.603 13:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.603 13:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:27.603 13:26:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:27.603 13:26:08 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.603 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:27.603 13:26:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.603 13:26:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.603 [2024-11-20 13:26:08.162396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:27.603 13:26:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.604 "name": "raid_bdev1", 00:12:27.604 "uuid": "886fc26d-1cc1-4fd3-84f6-5a5adec5975a", 00:12:27.604 "strip_size_kb": 0, 00:12:27.604 "state": "online", 00:12:27.604 "raid_level": "raid1", 00:12:27.604 "superblock": false, 00:12:27.604 "num_base_bdevs": 4, 00:12:27.604 "num_base_bdevs_discovered": 3, 00:12:27.604 "num_base_bdevs_operational": 3, 00:12:27.604 "base_bdevs_list": [ 00:12:27.604 { 00:12:27.604 "name": null, 00:12:27.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.604 "is_configured": false, 00:12:27.604 "data_offset": 0, 00:12:27.604 "data_size": 65536 00:12:27.604 }, 00:12:27.604 { 00:12:27.604 "name": "BaseBdev2", 00:12:27.604 "uuid": "0c4634a1-37d0-53b5-a49b-fef737e17c77", 00:12:27.604 "is_configured": true, 00:12:27.604 "data_offset": 0, 00:12:27.604 "data_size": 65536 00:12:27.604 }, 00:12:27.604 { 00:12:27.604 "name": "BaseBdev3", 00:12:27.604 "uuid": "91033d36-298b-514e-9cbe-15c2738f4a11", 00:12:27.604 "is_configured": true, 00:12:27.604 "data_offset": 0, 00:12:27.604 "data_size": 65536 00:12:27.604 }, 00:12:27.604 { 00:12:27.604 "name": "BaseBdev4", 00:12:27.604 "uuid": "b172790f-b81b-53bd-80ca-bbb9872e032e", 00:12:27.604 "is_configured": true, 00:12:27.604 "data_offset": 0, 00:12:27.604 "data_size": 65536 00:12:27.604 } 00:12:27.604 ] 00:12:27.604 }' 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.604 [2024-11-20 13:26:08.641712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:27.604 [2024-11-20 13:26:08.646347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d063c0 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.604 13:26:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:27.604 [2024-11-20 13:26:08.648826] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.174 "name": "raid_bdev1", 00:12:28.174 "uuid": "886fc26d-1cc1-4fd3-84f6-5a5adec5975a", 
00:12:28.174 "strip_size_kb": 0, 00:12:28.174 "state": "online", 00:12:28.174 "raid_level": "raid1", 00:12:28.174 "superblock": false, 00:12:28.174 "num_base_bdevs": 4, 00:12:28.174 "num_base_bdevs_discovered": 4, 00:12:28.174 "num_base_bdevs_operational": 4, 00:12:28.174 "process": { 00:12:28.174 "type": "rebuild", 00:12:28.174 "target": "spare", 00:12:28.174 "progress": { 00:12:28.174 "blocks": 20480, 00:12:28.174 "percent": 31 00:12:28.174 } 00:12:28.174 }, 00:12:28.174 "base_bdevs_list": [ 00:12:28.174 { 00:12:28.174 "name": "spare", 00:12:28.174 "uuid": "a377ad6f-a6ee-597b-a737-115227d8549f", 00:12:28.174 "is_configured": true, 00:12:28.174 "data_offset": 0, 00:12:28.174 "data_size": 65536 00:12:28.174 }, 00:12:28.174 { 00:12:28.174 "name": "BaseBdev2", 00:12:28.174 "uuid": "0c4634a1-37d0-53b5-a49b-fef737e17c77", 00:12:28.174 "is_configured": true, 00:12:28.174 "data_offset": 0, 00:12:28.174 "data_size": 65536 00:12:28.174 }, 00:12:28.174 { 00:12:28.174 "name": "BaseBdev3", 00:12:28.174 "uuid": "91033d36-298b-514e-9cbe-15c2738f4a11", 00:12:28.174 "is_configured": true, 00:12:28.174 "data_offset": 0, 00:12:28.174 "data_size": 65536 00:12:28.174 }, 00:12:28.174 { 00:12:28.174 "name": "BaseBdev4", 00:12:28.174 "uuid": "b172790f-b81b-53bd-80ca-bbb9872e032e", 00:12:28.174 "is_configured": true, 00:12:28.174 "data_offset": 0, 00:12:28.174 "data_size": 65536 00:12:28.174 } 00:12:28.174 ] 00:12:28.174 }' 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 
00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.174 13:26:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.174 [2024-11-20 13:26:09.817253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:28.432 [2024-11-20 13:26:09.855097] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:28.432 [2024-11-20 13:26:09.855338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.432 [2024-11-20 13:26:09.855406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:28.432 [2024-11-20 13:26:09.855449] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:28.432 13:26:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.432 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:28.432 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.432 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.432 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.432 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.432 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.432 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.432 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.432 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.432 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:28.432 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.432 13:26:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.432 13:26:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.432 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.433 13:26:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.433 13:26:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.433 "name": "raid_bdev1", 00:12:28.433 "uuid": "886fc26d-1cc1-4fd3-84f6-5a5adec5975a", 00:12:28.433 "strip_size_kb": 0, 00:12:28.433 "state": "online", 00:12:28.433 "raid_level": "raid1", 00:12:28.433 "superblock": false, 00:12:28.433 "num_base_bdevs": 4, 00:12:28.433 "num_base_bdevs_discovered": 3, 00:12:28.433 "num_base_bdevs_operational": 3, 00:12:28.433 "base_bdevs_list": [ 00:12:28.433 { 00:12:28.433 "name": null, 00:12:28.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.433 "is_configured": false, 00:12:28.433 "data_offset": 0, 00:12:28.433 "data_size": 65536 00:12:28.433 }, 00:12:28.433 { 00:12:28.433 "name": "BaseBdev2", 00:12:28.433 "uuid": "0c4634a1-37d0-53b5-a49b-fef737e17c77", 00:12:28.433 "is_configured": true, 00:12:28.433 "data_offset": 0, 00:12:28.433 "data_size": 65536 00:12:28.433 }, 00:12:28.433 { 00:12:28.433 "name": "BaseBdev3", 00:12:28.433 "uuid": "91033d36-298b-514e-9cbe-15c2738f4a11", 00:12:28.433 "is_configured": true, 00:12:28.433 "data_offset": 0, 00:12:28.433 "data_size": 65536 00:12:28.433 }, 00:12:28.433 { 00:12:28.433 "name": "BaseBdev4", 00:12:28.433 "uuid": "b172790f-b81b-53bd-80ca-bbb9872e032e", 00:12:28.433 "is_configured": true, 00:12:28.433 "data_offset": 0, 00:12:28.433 "data_size": 65536 00:12:28.433 } 00:12:28.433 ] 00:12:28.433 }' 00:12:28.433 13:26:09 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.433 13:26:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.691 13:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:28.691 13:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.691 13:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:28.691 13:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:28.691 13:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.691 13:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.691 13:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.691 13:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.691 13:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.691 13:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.950 13:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.950 "name": "raid_bdev1", 00:12:28.950 "uuid": "886fc26d-1cc1-4fd3-84f6-5a5adec5975a", 00:12:28.950 "strip_size_kb": 0, 00:12:28.950 "state": "online", 00:12:28.950 "raid_level": "raid1", 00:12:28.950 "superblock": false, 00:12:28.950 "num_base_bdevs": 4, 00:12:28.950 "num_base_bdevs_discovered": 3, 00:12:28.950 "num_base_bdevs_operational": 3, 00:12:28.950 "base_bdevs_list": [ 00:12:28.950 { 00:12:28.950 "name": null, 00:12:28.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.950 "is_configured": false, 00:12:28.950 "data_offset": 0, 00:12:28.950 "data_size": 65536 00:12:28.950 }, 00:12:28.950 { 00:12:28.950 "name": "BaseBdev2", 00:12:28.950 "uuid": 
"0c4634a1-37d0-53b5-a49b-fef737e17c77", 00:12:28.950 "is_configured": true, 00:12:28.950 "data_offset": 0, 00:12:28.950 "data_size": 65536 00:12:28.950 }, 00:12:28.950 { 00:12:28.950 "name": "BaseBdev3", 00:12:28.950 "uuid": "91033d36-298b-514e-9cbe-15c2738f4a11", 00:12:28.950 "is_configured": true, 00:12:28.950 "data_offset": 0, 00:12:28.950 "data_size": 65536 00:12:28.950 }, 00:12:28.950 { 00:12:28.950 "name": "BaseBdev4", 00:12:28.950 "uuid": "b172790f-b81b-53bd-80ca-bbb9872e032e", 00:12:28.950 "is_configured": true, 00:12:28.950 "data_offset": 0, 00:12:28.950 "data_size": 65536 00:12:28.950 } 00:12:28.950 ] 00:12:28.950 }' 00:12:28.950 13:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.950 13:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:28.950 13:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.950 13:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:28.950 13:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:28.950 13:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.950 13:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.951 [2024-11-20 13:26:10.483780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:28.951 [2024-11-20 13:26:10.488318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06490 00:12:28.951 13:26:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.951 13:26:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:28.951 [2024-11-20 13:26:10.490685] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:29.889 13:26:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.889 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.889 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.889 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.889 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.889 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.889 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.889 13:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.889 13:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.889 13:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.889 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.889 "name": "raid_bdev1", 00:12:29.889 "uuid": "886fc26d-1cc1-4fd3-84f6-5a5adec5975a", 00:12:29.889 "strip_size_kb": 0, 00:12:29.889 "state": "online", 00:12:29.889 "raid_level": "raid1", 00:12:29.889 "superblock": false, 00:12:29.889 "num_base_bdevs": 4, 00:12:29.889 "num_base_bdevs_discovered": 4, 00:12:29.889 "num_base_bdevs_operational": 4, 00:12:29.889 "process": { 00:12:29.889 "type": "rebuild", 00:12:29.889 "target": "spare", 00:12:29.889 "progress": { 00:12:29.889 "blocks": 20480, 00:12:29.889 "percent": 31 00:12:29.889 } 00:12:29.889 }, 00:12:29.889 "base_bdevs_list": [ 00:12:29.889 { 00:12:29.889 "name": "spare", 00:12:29.889 "uuid": "a377ad6f-a6ee-597b-a737-115227d8549f", 00:12:29.889 "is_configured": true, 00:12:29.889 "data_offset": 0, 00:12:29.889 "data_size": 65536 00:12:29.889 }, 00:12:29.889 { 
00:12:29.889 "name": "BaseBdev2", 00:12:29.889 "uuid": "0c4634a1-37d0-53b5-a49b-fef737e17c77", 00:12:29.889 "is_configured": true, 00:12:29.889 "data_offset": 0, 00:12:29.889 "data_size": 65536 00:12:29.889 }, 00:12:29.889 { 00:12:29.889 "name": "BaseBdev3", 00:12:29.889 "uuid": "91033d36-298b-514e-9cbe-15c2738f4a11", 00:12:29.889 "is_configured": true, 00:12:29.889 "data_offset": 0, 00:12:29.889 "data_size": 65536 00:12:29.889 }, 00:12:29.889 { 00:12:29.889 "name": "BaseBdev4", 00:12:29.889 "uuid": "b172790f-b81b-53bd-80ca-bbb9872e032e", 00:12:29.889 "is_configured": true, 00:12:29.889 "data_offset": 0, 00:12:29.889 "data_size": 65536 00:12:29.889 } 00:12:29.889 ] 00:12:29.889 }' 00:12:29.889 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.148 [2024-11-20 13:26:11.647765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:30.148 
[2024-11-20 13:26:11.696281] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06490 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.148 "name": "raid_bdev1", 00:12:30.148 "uuid": "886fc26d-1cc1-4fd3-84f6-5a5adec5975a", 00:12:30.148 "strip_size_kb": 0, 00:12:30.148 "state": "online", 00:12:30.148 "raid_level": "raid1", 00:12:30.148 "superblock": false, 00:12:30.148 "num_base_bdevs": 4, 00:12:30.148 "num_base_bdevs_discovered": 3, 00:12:30.148 "num_base_bdevs_operational": 3, 00:12:30.148 "process": { 
00:12:30.148 "type": "rebuild", 00:12:30.148 "target": "spare", 00:12:30.148 "progress": { 00:12:30.148 "blocks": 24576, 00:12:30.148 "percent": 37 00:12:30.148 } 00:12:30.148 }, 00:12:30.148 "base_bdevs_list": [ 00:12:30.148 { 00:12:30.148 "name": "spare", 00:12:30.148 "uuid": "a377ad6f-a6ee-597b-a737-115227d8549f", 00:12:30.148 "is_configured": true, 00:12:30.148 "data_offset": 0, 00:12:30.148 "data_size": 65536 00:12:30.148 }, 00:12:30.148 { 00:12:30.148 "name": null, 00:12:30.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.148 "is_configured": false, 00:12:30.148 "data_offset": 0, 00:12:30.148 "data_size": 65536 00:12:30.148 }, 00:12:30.148 { 00:12:30.148 "name": "BaseBdev3", 00:12:30.148 "uuid": "91033d36-298b-514e-9cbe-15c2738f4a11", 00:12:30.148 "is_configured": true, 00:12:30.148 "data_offset": 0, 00:12:30.148 "data_size": 65536 00:12:30.148 }, 00:12:30.148 { 00:12:30.148 "name": "BaseBdev4", 00:12:30.148 "uuid": "b172790f-b81b-53bd-80ca-bbb9872e032e", 00:12:30.148 "is_configured": true, 00:12:30.148 "data_offset": 0, 00:12:30.148 "data_size": 65536 00:12:30.148 } 00:12:30.148 ] 00:12:30.148 }' 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:30.148 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=360 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.407 "name": "raid_bdev1", 00:12:30.407 "uuid": "886fc26d-1cc1-4fd3-84f6-5a5adec5975a", 00:12:30.407 "strip_size_kb": 0, 00:12:30.407 "state": "online", 00:12:30.407 "raid_level": "raid1", 00:12:30.407 "superblock": false, 00:12:30.407 "num_base_bdevs": 4, 00:12:30.407 "num_base_bdevs_discovered": 3, 00:12:30.407 "num_base_bdevs_operational": 3, 00:12:30.407 "process": { 00:12:30.407 "type": "rebuild", 00:12:30.407 "target": "spare", 00:12:30.407 "progress": { 00:12:30.407 "blocks": 26624, 00:12:30.407 "percent": 40 00:12:30.407 } 00:12:30.407 }, 00:12:30.407 "base_bdevs_list": [ 00:12:30.407 { 00:12:30.407 "name": "spare", 00:12:30.407 "uuid": "a377ad6f-a6ee-597b-a737-115227d8549f", 00:12:30.407 "is_configured": true, 00:12:30.407 "data_offset": 0, 00:12:30.407 "data_size": 65536 00:12:30.407 }, 00:12:30.407 { 00:12:30.407 "name": null, 00:12:30.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.407 "is_configured": false, 00:12:30.407 "data_offset": 0, 00:12:30.407 "data_size": 65536 00:12:30.407 }, 
00:12:30.407 { 00:12:30.407 "name": "BaseBdev3", 00:12:30.407 "uuid": "91033d36-298b-514e-9cbe-15c2738f4a11", 00:12:30.407 "is_configured": true, 00:12:30.407 "data_offset": 0, 00:12:30.407 "data_size": 65536 00:12:30.407 }, 00:12:30.407 { 00:12:30.407 "name": "BaseBdev4", 00:12:30.407 "uuid": "b172790f-b81b-53bd-80ca-bbb9872e032e", 00:12:30.407 "is_configured": true, 00:12:30.407 "data_offset": 0, 00:12:30.407 "data_size": 65536 00:12:30.407 } 00:12:30.407 ] 00:12:30.407 }' 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:30.407 13:26:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:31.342 13:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:31.342 13:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.342 13:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.342 13:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.342 13:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.342 13:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.342 13:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.342 13:26:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.342 13:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:31.342 13:26:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.342 13:26:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.600 13:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.600 "name": "raid_bdev1", 00:12:31.600 "uuid": "886fc26d-1cc1-4fd3-84f6-5a5adec5975a", 00:12:31.600 "strip_size_kb": 0, 00:12:31.600 "state": "online", 00:12:31.600 "raid_level": "raid1", 00:12:31.600 "superblock": false, 00:12:31.600 "num_base_bdevs": 4, 00:12:31.600 "num_base_bdevs_discovered": 3, 00:12:31.600 "num_base_bdevs_operational": 3, 00:12:31.600 "process": { 00:12:31.600 "type": "rebuild", 00:12:31.600 "target": "spare", 00:12:31.600 "progress": { 00:12:31.600 "blocks": 49152, 00:12:31.600 "percent": 75 00:12:31.600 } 00:12:31.600 }, 00:12:31.600 "base_bdevs_list": [ 00:12:31.600 { 00:12:31.600 "name": "spare", 00:12:31.600 "uuid": "a377ad6f-a6ee-597b-a737-115227d8549f", 00:12:31.600 "is_configured": true, 00:12:31.600 "data_offset": 0, 00:12:31.600 "data_size": 65536 00:12:31.600 }, 00:12:31.600 { 00:12:31.600 "name": null, 00:12:31.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.600 "is_configured": false, 00:12:31.600 "data_offset": 0, 00:12:31.600 "data_size": 65536 00:12:31.600 }, 00:12:31.600 { 00:12:31.600 "name": "BaseBdev3", 00:12:31.600 "uuid": "91033d36-298b-514e-9cbe-15c2738f4a11", 00:12:31.600 "is_configured": true, 00:12:31.600 "data_offset": 0, 00:12:31.600 "data_size": 65536 00:12:31.600 }, 00:12:31.600 { 00:12:31.600 "name": "BaseBdev4", 00:12:31.600 "uuid": "b172790f-b81b-53bd-80ca-bbb9872e032e", 00:12:31.600 "is_configured": true, 00:12:31.600 "data_offset": 0, 00:12:31.600 "data_size": 65536 00:12:31.600 } 00:12:31.600 ] 00:12:31.600 }' 00:12:31.600 13:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.600 13:26:13 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.601 13:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.601 13:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.601 13:26:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:32.166 [2024-11-20 13:26:13.706129] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:32.166 [2024-11-20 13:26:13.706282] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:32.166 [2024-11-20 13:26:13.706339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.735 "name": "raid_bdev1", 00:12:32.735 "uuid": "886fc26d-1cc1-4fd3-84f6-5a5adec5975a", 00:12:32.735 "strip_size_kb": 0, 00:12:32.735 "state": "online", 00:12:32.735 "raid_level": "raid1", 00:12:32.735 "superblock": false, 00:12:32.735 "num_base_bdevs": 4, 00:12:32.735 "num_base_bdevs_discovered": 3, 00:12:32.735 "num_base_bdevs_operational": 3, 00:12:32.735 "base_bdevs_list": [ 00:12:32.735 { 00:12:32.735 "name": "spare", 00:12:32.735 "uuid": "a377ad6f-a6ee-597b-a737-115227d8549f", 00:12:32.735 "is_configured": true, 00:12:32.735 "data_offset": 0, 00:12:32.735 "data_size": 65536 00:12:32.735 }, 00:12:32.735 { 00:12:32.735 "name": null, 00:12:32.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.735 "is_configured": false, 00:12:32.735 "data_offset": 0, 00:12:32.735 "data_size": 65536 00:12:32.735 }, 00:12:32.735 { 00:12:32.735 "name": "BaseBdev3", 00:12:32.735 "uuid": "91033d36-298b-514e-9cbe-15c2738f4a11", 00:12:32.735 "is_configured": true, 00:12:32.735 "data_offset": 0, 00:12:32.735 "data_size": 65536 00:12:32.735 }, 00:12:32.735 { 00:12:32.735 "name": "BaseBdev4", 00:12:32.735 "uuid": "b172790f-b81b-53bd-80ca-bbb9872e032e", 00:12:32.735 "is_configured": true, 00:12:32.735 "data_offset": 0, 00:12:32.735 "data_size": 65536 00:12:32.735 } 00:12:32.735 ] 00:12:32.735 }' 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.735 "name": "raid_bdev1", 00:12:32.735 "uuid": "886fc26d-1cc1-4fd3-84f6-5a5adec5975a", 00:12:32.735 "strip_size_kb": 0, 00:12:32.735 "state": "online", 00:12:32.735 "raid_level": "raid1", 00:12:32.735 "superblock": false, 00:12:32.735 "num_base_bdevs": 4, 00:12:32.735 "num_base_bdevs_discovered": 3, 00:12:32.735 "num_base_bdevs_operational": 3, 00:12:32.735 "base_bdevs_list": [ 00:12:32.735 { 00:12:32.735 "name": "spare", 00:12:32.735 "uuid": "a377ad6f-a6ee-597b-a737-115227d8549f", 00:12:32.735 "is_configured": true, 00:12:32.735 "data_offset": 0, 00:12:32.735 "data_size": 65536 00:12:32.735 }, 00:12:32.735 { 00:12:32.735 "name": null, 00:12:32.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.735 "is_configured": false, 00:12:32.735 "data_offset": 0, 00:12:32.735 "data_size": 65536 00:12:32.735 }, 00:12:32.735 { 00:12:32.735 "name": "BaseBdev3", 00:12:32.735 "uuid": "91033d36-298b-514e-9cbe-15c2738f4a11", 
00:12:32.735 "is_configured": true, 00:12:32.735 "data_offset": 0, 00:12:32.735 "data_size": 65536 00:12:32.735 }, 00:12:32.735 { 00:12:32.735 "name": "BaseBdev4", 00:12:32.735 "uuid": "b172790f-b81b-53bd-80ca-bbb9872e032e", 00:12:32.735 "is_configured": true, 00:12:32.735 "data_offset": 0, 00:12:32.735 "data_size": 65536 00:12:32.735 } 00:12:32.735 ] 00:12:32.735 }' 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:32.735 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.994 
13:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.994 "name": "raid_bdev1", 00:12:32.994 "uuid": "886fc26d-1cc1-4fd3-84f6-5a5adec5975a", 00:12:32.994 "strip_size_kb": 0, 00:12:32.994 "state": "online", 00:12:32.994 "raid_level": "raid1", 00:12:32.994 "superblock": false, 00:12:32.994 "num_base_bdevs": 4, 00:12:32.994 "num_base_bdevs_discovered": 3, 00:12:32.994 "num_base_bdevs_operational": 3, 00:12:32.994 "base_bdevs_list": [ 00:12:32.994 { 00:12:32.994 "name": "spare", 00:12:32.994 "uuid": "a377ad6f-a6ee-597b-a737-115227d8549f", 00:12:32.994 "is_configured": true, 00:12:32.994 "data_offset": 0, 00:12:32.994 "data_size": 65536 00:12:32.994 }, 00:12:32.994 { 00:12:32.994 "name": null, 00:12:32.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.994 "is_configured": false, 00:12:32.994 "data_offset": 0, 00:12:32.994 "data_size": 65536 00:12:32.994 }, 00:12:32.994 { 00:12:32.994 "name": "BaseBdev3", 00:12:32.994 "uuid": "91033d36-298b-514e-9cbe-15c2738f4a11", 00:12:32.994 "is_configured": true, 00:12:32.994 "data_offset": 0, 00:12:32.994 "data_size": 65536 00:12:32.994 }, 00:12:32.994 { 00:12:32.994 "name": "BaseBdev4", 00:12:32.994 "uuid": "b172790f-b81b-53bd-80ca-bbb9872e032e", 00:12:32.994 "is_configured": true, 00:12:32.994 "data_offset": 0, 00:12:32.994 "data_size": 65536 00:12:32.994 } 00:12:32.994 ] 00:12:32.994 }' 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.994 13:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.314 [2024-11-20 13:26:14.873084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:33.314 [2024-11-20 13:26:14.873213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:33.314 [2024-11-20 13:26:14.873370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.314 [2024-11-20 13:26:14.873500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.314 [2024-11-20 13:26:14.873556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks 
/var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:33.314 13:26:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:33.315 13:26:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:33.573 /dev/nbd0 00:12:33.573 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:33.573 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:33.573 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:33.573 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:33.573 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:33.574 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:33.574 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:33.574 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:33.574 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:33.574 
13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:33.574 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.832 1+0 records in 00:12:33.832 1+0 records out 00:12:33.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670249 s, 6.1 MB/s 00:12:33.832 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.832 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:33.832 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.832 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:33.833 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:33.833 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.833 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:33.833 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:34.091 /dev/nbd1 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:34.091 13:26:15 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.091 1+0 records in 00:12:34.091 1+0 records out 00:12:34.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443414 s, 9.2 MB/s 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:34.091 13:26:15 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.091 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:34.349 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:34.349 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:34.349 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:34.349 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.349 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.349 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:34.349 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:34.349 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.349 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.349 13:26:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:34.606 13:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:34.606 13:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:34.607 13:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:34.607 13:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.607 13:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.607 13:26:16 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:34.607 13:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:34.607 13:26:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.607 13:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:34.607 13:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 87880 00:12:34.607 13:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 87880 ']' 00:12:34.607 13:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 87880 00:12:34.865 13:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:34.865 13:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.865 13:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87880 00:12:34.865 killing process with pid 87880 00:12:34.865 Received shutdown signal, test time was about 60.000000 seconds 00:12:34.865 00:12:34.865 Latency(us) 00:12:34.865 [2024-11-20T13:26:16.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.865 [2024-11-20T13:26:16.533Z] =================================================================================================================== 00:12:34.865 [2024-11-20T13:26:16.533Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:34.865 13:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.865 13:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.865 13:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87880' 00:12:34.865 13:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 87880 00:12:34.865 [2024-11-20 
13:26:16.311629] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:34.865 13:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 87880 00:12:34.865 [2024-11-20 13:26:16.365900] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:35.124 00:12:35.124 real 0m17.680s 00:12:35.124 user 0m20.048s 00:12:35.124 sys 0m3.600s 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.124 ************************************ 00:12:35.124 END TEST raid_rebuild_test 00:12:35.124 ************************************ 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.124 13:26:16 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:12:35.124 13:26:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:35.124 13:26:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.124 13:26:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:35.124 ************************************ 00:12:35.124 START TEST raid_rebuild_test_sb 00:12:35.124 ************************************ 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 
00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:35.124 13:26:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88327 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88327 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 88327 ']' 00:12:35.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.124 13:26:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.124 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:35.124 Zero copy mechanism will not be used. 
00:12:35.124 [2024-11-20 13:26:16.764988] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:12:35.124 [2024-11-20 13:26:16.765140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88327 ] 00:12:35.384 [2024-11-20 13:26:16.923724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.384 [2024-11-20 13:26:16.954026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.384 [2024-11-20 13:26:16.998921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.384 [2024-11-20 13:26:16.998977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.322 BaseBdev1_malloc 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:12:36.322 [2024-11-20 13:26:17.695089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:36.322 [2024-11-20 13:26:17.695254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.322 [2024-11-20 13:26:17.695325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:36.322 [2024-11-20 13:26:17.695396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.322 [2024-11-20 13:26:17.698099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.322 [2024-11-20 13:26:17.698207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:36.322 BaseBdev1 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.322 BaseBdev2_malloc 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.322 [2024-11-20 13:26:17.724676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:36.322 [2024-11-20 13:26:17.724841] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.322 [2024-11-20 13:26:17.724900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:36.322 [2024-11-20 13:26:17.724938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.322 [2024-11-20 13:26:17.727550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.322 [2024-11-20 13:26:17.727688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:36.322 BaseBdev2 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:36.322 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.323 BaseBdev3_malloc 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.323 [2024-11-20 13:26:17.754032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:36.323 [2024-11-20 13:26:17.754175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.323 [2024-11-20 13:26:17.754235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 
00:12:36.323 [2024-11-20 13:26:17.754272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.323 [2024-11-20 13:26:17.756789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.323 [2024-11-20 13:26:17.756895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:36.323 BaseBdev3 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.323 BaseBdev4_malloc 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.323 [2024-11-20 13:26:17.794432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:36.323 [2024-11-20 13:26:17.794580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.323 [2024-11-20 13:26:17.794614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:36.323 [2024-11-20 13:26:17.794625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.323 [2024-11-20 13:26:17.797243] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.323 BaseBdev4 00:12:36.323 [2024-11-20 13:26:17.797345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.323 spare_malloc 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.323 spare_delay 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.323 [2024-11-20 13:26:17.835558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:36.323 [2024-11-20 13:26:17.835727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.323 [2024-11-20 13:26:17.835786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 
00:12:36.323 [2024-11-20 13:26:17.835822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.323 [2024-11-20 13:26:17.838302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.323 [2024-11-20 13:26:17.838400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:36.323 spare 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.323 [2024-11-20 13:26:17.847668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.323 [2024-11-20 13:26:17.849843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.323 [2024-11-20 13:26:17.849986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.323 [2024-11-20 13:26:17.850101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:36.323 [2024-11-20 13:26:17.850345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:36.323 [2024-11-20 13:26:17.850403] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:36.323 [2024-11-20 13:26:17.850770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:36.323 [2024-11-20 13:26:17.850979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:36.323 [2024-11-20 13:26:17.851045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x617000001200 00:12:36.323 [2024-11-20 13:26:17.851257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:36.323 "name": "raid_bdev1", 00:12:36.323 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:36.323 "strip_size_kb": 0, 00:12:36.323 "state": "online", 00:12:36.323 "raid_level": "raid1", 00:12:36.323 "superblock": true, 00:12:36.323 "num_base_bdevs": 4, 00:12:36.323 "num_base_bdevs_discovered": 4, 00:12:36.323 "num_base_bdevs_operational": 4, 00:12:36.323 "base_bdevs_list": [ 00:12:36.323 { 00:12:36.323 "name": "BaseBdev1", 00:12:36.323 "uuid": "73680b08-210c-50dc-b967-2fdee362a9f5", 00:12:36.323 "is_configured": true, 00:12:36.323 "data_offset": 2048, 00:12:36.323 "data_size": 63488 00:12:36.323 }, 00:12:36.323 { 00:12:36.323 "name": "BaseBdev2", 00:12:36.323 "uuid": "d5b84e3e-2cc0-5274-95af-6492afa9dee0", 00:12:36.323 "is_configured": true, 00:12:36.323 "data_offset": 2048, 00:12:36.323 "data_size": 63488 00:12:36.323 }, 00:12:36.323 { 00:12:36.323 "name": "BaseBdev3", 00:12:36.323 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:36.323 "is_configured": true, 00:12:36.323 "data_offset": 2048, 00:12:36.323 "data_size": 63488 00:12:36.323 }, 00:12:36.323 { 00:12:36.323 "name": "BaseBdev4", 00:12:36.323 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:36.323 "is_configured": true, 00:12:36.323 "data_offset": 2048, 00:12:36.323 "data_size": 63488 00:12:36.323 } 00:12:36.323 ] 00:12:36.323 }' 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.323 13:26:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:36.890 [2024-11-20 
13:26:18.339115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@12 -- # local i 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:36.890 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:37.149 [2024-11-20 13:26:18.654288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:12:37.149 /dev/nbd0 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.149 1+0 records in 00:12:37.149 1+0 records out 00:12:37.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402797 s, 10.2 MB/s 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:37.149 13:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:43.738 63488+0 records in 00:12:43.738 63488+0 records out 00:12:43.738 32505856 bytes (33 MB, 31 MiB) copied, 6.40456 s, 5.1 MB/s 00:12:43.738 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:43.738 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:43.738 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:43.738 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:43.738 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:43.738 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.738 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:43.738 [2024-11-20 13:26:25.398342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.996 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:43.996 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:43.996 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:43.996 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.996 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.996 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:43.996 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:43.996 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.996 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:43.996 13:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.996 13:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.996 [2024-11-20 13:26:25.438368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:43.996 13:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.997 "name": "raid_bdev1", 00:12:43.997 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:43.997 "strip_size_kb": 0, 00:12:43.997 "state": "online", 00:12:43.997 "raid_level": "raid1", 00:12:43.997 "superblock": true, 00:12:43.997 "num_base_bdevs": 4, 00:12:43.997 "num_base_bdevs_discovered": 3, 00:12:43.997 "num_base_bdevs_operational": 3, 00:12:43.997 "base_bdevs_list": [ 00:12:43.997 { 00:12:43.997 "name": null, 00:12:43.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.997 "is_configured": false, 00:12:43.997 "data_offset": 0, 00:12:43.997 "data_size": 63488 00:12:43.997 }, 00:12:43.997 { 00:12:43.997 "name": "BaseBdev2", 00:12:43.997 "uuid": 
"d5b84e3e-2cc0-5274-95af-6492afa9dee0", 00:12:43.997 "is_configured": true, 00:12:43.997 "data_offset": 2048, 00:12:43.997 "data_size": 63488 00:12:43.997 }, 00:12:43.997 { 00:12:43.997 "name": "BaseBdev3", 00:12:43.997 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:43.997 "is_configured": true, 00:12:43.997 "data_offset": 2048, 00:12:43.997 "data_size": 63488 00:12:43.997 }, 00:12:43.997 { 00:12:43.997 "name": "BaseBdev4", 00:12:43.997 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:43.997 "is_configured": true, 00:12:43.997 "data_offset": 2048, 00:12:43.997 "data_size": 63488 00:12:43.997 } 00:12:43.997 ] 00:12:43.997 }' 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.997 13:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.565 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:44.565 13:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.565 13:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.565 [2024-11-20 13:26:25.945589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:44.565 [2024-11-20 13:26:25.950052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:12:44.565 13:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.565 13:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:44.565 [2024-11-20 13:26:25.952403] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:45.503 13:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.503 13:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:45.503 13:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.503 13:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.503 13:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.503 13:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.503 13:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.503 13:26:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.503 13:26:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.503 13:26:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.503 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.503 "name": "raid_bdev1", 00:12:45.503 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:45.503 "strip_size_kb": 0, 00:12:45.503 "state": "online", 00:12:45.503 "raid_level": "raid1", 00:12:45.503 "superblock": true, 00:12:45.503 "num_base_bdevs": 4, 00:12:45.503 "num_base_bdevs_discovered": 4, 00:12:45.503 "num_base_bdevs_operational": 4, 00:12:45.503 "process": { 00:12:45.503 "type": "rebuild", 00:12:45.503 "target": "spare", 00:12:45.503 "progress": { 00:12:45.503 "blocks": 20480, 00:12:45.503 "percent": 32 00:12:45.503 } 00:12:45.503 }, 00:12:45.503 "base_bdevs_list": [ 00:12:45.503 { 00:12:45.503 "name": "spare", 00:12:45.503 "uuid": "0688b2ba-ec0b-5e26-9cea-687ac16b65db", 00:12:45.503 "is_configured": true, 00:12:45.503 "data_offset": 2048, 00:12:45.503 "data_size": 63488 00:12:45.503 }, 00:12:45.503 { 00:12:45.503 "name": "BaseBdev2", 00:12:45.503 "uuid": "d5b84e3e-2cc0-5274-95af-6492afa9dee0", 00:12:45.503 "is_configured": true, 00:12:45.503 "data_offset": 2048, 
00:12:45.503 "data_size": 63488 00:12:45.503 }, 00:12:45.503 { 00:12:45.503 "name": "BaseBdev3", 00:12:45.503 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:45.503 "is_configured": true, 00:12:45.503 "data_offset": 2048, 00:12:45.503 "data_size": 63488 00:12:45.503 }, 00:12:45.503 { 00:12:45.503 "name": "BaseBdev4", 00:12:45.503 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:45.503 "is_configured": true, 00:12:45.503 "data_offset": 2048, 00:12:45.503 "data_size": 63488 00:12:45.503 } 00:12:45.503 ] 00:12:45.503 }' 00:12:45.503 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.503 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.503 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.503 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.503 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:45.503 13:26:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.503 13:26:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.503 [2024-11-20 13:26:27.121447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:45.503 [2024-11-20 13:26:27.158693] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:45.503 [2024-11-20 13:26:27.158890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.503 [2024-11-20 13:26:27.158945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:45.503 [2024-11-20 13:26:27.159020] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:45.503 13:26:27 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.503 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:45.762 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.762 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.762 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.762 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.762 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.762 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.762 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.762 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.762 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.762 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.762 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.763 13:26:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.763 13:26:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.763 13:26:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.763 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.763 "name": "raid_bdev1", 00:12:45.763 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:45.763 "strip_size_kb": 0, 00:12:45.763 "state": "online", 00:12:45.763 "raid_level": "raid1", 
00:12:45.763 "superblock": true, 00:12:45.763 "num_base_bdevs": 4, 00:12:45.763 "num_base_bdevs_discovered": 3, 00:12:45.763 "num_base_bdevs_operational": 3, 00:12:45.763 "base_bdevs_list": [ 00:12:45.763 { 00:12:45.763 "name": null, 00:12:45.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.763 "is_configured": false, 00:12:45.763 "data_offset": 0, 00:12:45.763 "data_size": 63488 00:12:45.763 }, 00:12:45.763 { 00:12:45.763 "name": "BaseBdev2", 00:12:45.763 "uuid": "d5b84e3e-2cc0-5274-95af-6492afa9dee0", 00:12:45.763 "is_configured": true, 00:12:45.763 "data_offset": 2048, 00:12:45.763 "data_size": 63488 00:12:45.763 }, 00:12:45.763 { 00:12:45.763 "name": "BaseBdev3", 00:12:45.763 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:45.763 "is_configured": true, 00:12:45.763 "data_offset": 2048, 00:12:45.763 "data_size": 63488 00:12:45.763 }, 00:12:45.763 { 00:12:45.763 "name": "BaseBdev4", 00:12:45.763 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:45.763 "is_configured": true, 00:12:45.763 "data_offset": 2048, 00:12:45.763 "data_size": 63488 00:12:45.763 } 00:12:45.763 ] 00:12:45.763 }' 00:12:45.763 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.763 13:26:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.022 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:46.022 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.022 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:46.022 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:46.022 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.022 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:46.022 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.022 13:26:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.022 13:26:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.022 13:26:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.281 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.281 "name": "raid_bdev1", 00:12:46.281 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:46.281 "strip_size_kb": 0, 00:12:46.281 "state": "online", 00:12:46.281 "raid_level": "raid1", 00:12:46.281 "superblock": true, 00:12:46.281 "num_base_bdevs": 4, 00:12:46.281 "num_base_bdevs_discovered": 3, 00:12:46.281 "num_base_bdevs_operational": 3, 00:12:46.281 "base_bdevs_list": [ 00:12:46.281 { 00:12:46.281 "name": null, 00:12:46.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.281 "is_configured": false, 00:12:46.281 "data_offset": 0, 00:12:46.281 "data_size": 63488 00:12:46.281 }, 00:12:46.281 { 00:12:46.281 "name": "BaseBdev2", 00:12:46.281 "uuid": "d5b84e3e-2cc0-5274-95af-6492afa9dee0", 00:12:46.281 "is_configured": true, 00:12:46.281 "data_offset": 2048, 00:12:46.281 "data_size": 63488 00:12:46.281 }, 00:12:46.281 { 00:12:46.281 "name": "BaseBdev3", 00:12:46.281 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:46.281 "is_configured": true, 00:12:46.281 "data_offset": 2048, 00:12:46.281 "data_size": 63488 00:12:46.281 }, 00:12:46.281 { 00:12:46.281 "name": "BaseBdev4", 00:12:46.281 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:46.281 "is_configured": true, 00:12:46.281 "data_offset": 2048, 00:12:46.281 "data_size": 63488 00:12:46.281 } 00:12:46.281 ] 00:12:46.281 }' 00:12:46.281 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.281 13:26:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:46.281 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.281 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:46.281 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:46.281 13:26:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.281 13:26:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.281 [2024-11-20 13:26:27.799116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.281 [2024-11-20 13:26:27.803578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e4f0 00:12:46.281 13:26:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.281 13:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:46.281 [2024-11-20 13:26:27.805817] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:47.218 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.218 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.218 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.218 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.218 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.218 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.218 13:26:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:47.219 13:26:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.219 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.219 13:26:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.219 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.219 "name": "raid_bdev1", 00:12:47.219 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:47.219 "strip_size_kb": 0, 00:12:47.219 "state": "online", 00:12:47.219 "raid_level": "raid1", 00:12:47.219 "superblock": true, 00:12:47.219 "num_base_bdevs": 4, 00:12:47.219 "num_base_bdevs_discovered": 4, 00:12:47.219 "num_base_bdevs_operational": 4, 00:12:47.219 "process": { 00:12:47.219 "type": "rebuild", 00:12:47.219 "target": "spare", 00:12:47.219 "progress": { 00:12:47.219 "blocks": 20480, 00:12:47.219 "percent": 32 00:12:47.219 } 00:12:47.219 }, 00:12:47.219 "base_bdevs_list": [ 00:12:47.219 { 00:12:47.219 "name": "spare", 00:12:47.219 "uuid": "0688b2ba-ec0b-5e26-9cea-687ac16b65db", 00:12:47.219 "is_configured": true, 00:12:47.219 "data_offset": 2048, 00:12:47.219 "data_size": 63488 00:12:47.219 }, 00:12:47.219 { 00:12:47.219 "name": "BaseBdev2", 00:12:47.219 "uuid": "d5b84e3e-2cc0-5274-95af-6492afa9dee0", 00:12:47.219 "is_configured": true, 00:12:47.219 "data_offset": 2048, 00:12:47.219 "data_size": 63488 00:12:47.219 }, 00:12:47.219 { 00:12:47.219 "name": "BaseBdev3", 00:12:47.219 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:47.219 "is_configured": true, 00:12:47.219 "data_offset": 2048, 00:12:47.219 "data_size": 63488 00:12:47.219 }, 00:12:47.219 { 00:12:47.219 "name": "BaseBdev4", 00:12:47.219 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:47.219 "is_configured": true, 00:12:47.219 "data_offset": 2048, 00:12:47.219 "data_size": 63488 00:12:47.219 } 00:12:47.219 ] 00:12:47.219 }' 00:12:47.219 13:26:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.477 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.477 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.477 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.477 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:47.477 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:47.477 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:47.477 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:47.477 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:47.477 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:47.477 13:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:47.477 13:26:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.477 13:26:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.477 [2024-11-20 13:26:28.962235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:47.477 [2024-11-20 13:26:29.111217] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e4f0 00:12:47.477 13:26:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.477 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:47.477 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:47.477 13:26:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.477 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.477 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.477 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.477 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.477 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.477 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.477 13:26:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.477 13:26:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.477 13:26:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.736 "name": "raid_bdev1", 00:12:47.736 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:47.736 "strip_size_kb": 0, 00:12:47.736 "state": "online", 00:12:47.736 "raid_level": "raid1", 00:12:47.736 "superblock": true, 00:12:47.736 "num_base_bdevs": 4, 00:12:47.736 "num_base_bdevs_discovered": 3, 00:12:47.736 "num_base_bdevs_operational": 3, 00:12:47.736 "process": { 00:12:47.736 "type": "rebuild", 00:12:47.736 "target": "spare", 00:12:47.736 "progress": { 00:12:47.736 "blocks": 24576, 00:12:47.736 "percent": 38 00:12:47.736 } 00:12:47.736 }, 00:12:47.736 "base_bdevs_list": [ 00:12:47.736 { 00:12:47.736 "name": "spare", 00:12:47.736 "uuid": "0688b2ba-ec0b-5e26-9cea-687ac16b65db", 00:12:47.736 "is_configured": true, 00:12:47.736 "data_offset": 2048, 00:12:47.736 "data_size": 63488 
00:12:47.736 }, 00:12:47.736 { 00:12:47.736 "name": null, 00:12:47.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.736 "is_configured": false, 00:12:47.736 "data_offset": 0, 00:12:47.736 "data_size": 63488 00:12:47.736 }, 00:12:47.736 { 00:12:47.736 "name": "BaseBdev3", 00:12:47.736 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:47.736 "is_configured": true, 00:12:47.736 "data_offset": 2048, 00:12:47.736 "data_size": 63488 00:12:47.736 }, 00:12:47.736 { 00:12:47.736 "name": "BaseBdev4", 00:12:47.736 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:47.736 "is_configured": true, 00:12:47.736 "data_offset": 2048, 00:12:47.736 "data_size": 63488 00:12:47.736 } 00:12:47.736 ] 00:12:47.736 }' 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=378 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.736 13:26:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.736 "name": "raid_bdev1", 00:12:47.736 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:47.736 "strip_size_kb": 0, 00:12:47.736 "state": "online", 00:12:47.736 "raid_level": "raid1", 00:12:47.736 "superblock": true, 00:12:47.736 "num_base_bdevs": 4, 00:12:47.736 "num_base_bdevs_discovered": 3, 00:12:47.736 "num_base_bdevs_operational": 3, 00:12:47.736 "process": { 00:12:47.736 "type": "rebuild", 00:12:47.736 "target": "spare", 00:12:47.736 "progress": { 00:12:47.736 "blocks": 26624, 00:12:47.736 "percent": 41 00:12:47.736 } 00:12:47.736 }, 00:12:47.736 "base_bdevs_list": [ 00:12:47.736 { 00:12:47.736 "name": "spare", 00:12:47.736 "uuid": "0688b2ba-ec0b-5e26-9cea-687ac16b65db", 00:12:47.736 "is_configured": true, 00:12:47.736 "data_offset": 2048, 00:12:47.736 "data_size": 63488 00:12:47.736 }, 00:12:47.736 { 00:12:47.736 "name": null, 00:12:47.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.736 "is_configured": false, 00:12:47.736 "data_offset": 0, 00:12:47.736 "data_size": 63488 00:12:47.736 }, 00:12:47.736 { 00:12:47.736 "name": "BaseBdev3", 00:12:47.736 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:47.736 "is_configured": true, 00:12:47.736 "data_offset": 2048, 00:12:47.736 "data_size": 63488 00:12:47.736 }, 00:12:47.736 { 00:12:47.736 "name": "BaseBdev4", 00:12:47.736 "uuid": 
"5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:47.736 "is_configured": true, 00:12:47.736 "data_offset": 2048, 00:12:47.736 "data_size": 63488 00:12:47.736 } 00:12:47.736 ] 00:12:47.736 }' 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.736 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.996 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.996 13:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:48.933 13:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:48.933 13:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.933 13:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.933 13:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.933 13:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.933 13:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.933 13:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.933 13:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.933 13:26:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.933 13:26:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.933 13:26:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.933 13:26:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.933 "name": "raid_bdev1", 00:12:48.933 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:48.933 "strip_size_kb": 0, 00:12:48.933 "state": "online", 00:12:48.933 "raid_level": "raid1", 00:12:48.933 "superblock": true, 00:12:48.933 "num_base_bdevs": 4, 00:12:48.933 "num_base_bdevs_discovered": 3, 00:12:48.933 "num_base_bdevs_operational": 3, 00:12:48.933 "process": { 00:12:48.933 "type": "rebuild", 00:12:48.933 "target": "spare", 00:12:48.933 "progress": { 00:12:48.933 "blocks": 51200, 00:12:48.933 "percent": 80 00:12:48.933 } 00:12:48.933 }, 00:12:48.933 "base_bdevs_list": [ 00:12:48.933 { 00:12:48.933 "name": "spare", 00:12:48.933 "uuid": "0688b2ba-ec0b-5e26-9cea-687ac16b65db", 00:12:48.933 "is_configured": true, 00:12:48.933 "data_offset": 2048, 00:12:48.933 "data_size": 63488 00:12:48.933 }, 00:12:48.933 { 00:12:48.933 "name": null, 00:12:48.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.933 "is_configured": false, 00:12:48.933 "data_offset": 0, 00:12:48.933 "data_size": 63488 00:12:48.933 }, 00:12:48.933 { 00:12:48.933 "name": "BaseBdev3", 00:12:48.933 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:48.933 "is_configured": true, 00:12:48.933 "data_offset": 2048, 00:12:48.933 "data_size": 63488 00:12:48.933 }, 00:12:48.933 { 00:12:48.933 "name": "BaseBdev4", 00:12:48.933 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:48.933 "is_configured": true, 00:12:48.933 "data_offset": 2048, 00:12:48.933 "data_size": 63488 00:12:48.933 } 00:12:48.933 ] 00:12:48.934 }' 00:12:48.934 13:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.934 13:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.934 13:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.934 13:26:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.934 13:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:49.502 [2024-11-20 13:26:31.020335] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:49.502 [2024-11-20 13:26:31.020550] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:49.502 [2024-11-20 13:26:31.020733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.070 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.070 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.070 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.070 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.070 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.070 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.070 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.070 13:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.070 13:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.070 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.070 13:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.070 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.070 "name": "raid_bdev1", 00:12:50.070 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:50.070 
"strip_size_kb": 0, 00:12:50.070 "state": "online", 00:12:50.070 "raid_level": "raid1", 00:12:50.070 "superblock": true, 00:12:50.070 "num_base_bdevs": 4, 00:12:50.070 "num_base_bdevs_discovered": 3, 00:12:50.070 "num_base_bdevs_operational": 3, 00:12:50.070 "base_bdevs_list": [ 00:12:50.070 { 00:12:50.070 "name": "spare", 00:12:50.070 "uuid": "0688b2ba-ec0b-5e26-9cea-687ac16b65db", 00:12:50.070 "is_configured": true, 00:12:50.070 "data_offset": 2048, 00:12:50.070 "data_size": 63488 00:12:50.070 }, 00:12:50.070 { 00:12:50.070 "name": null, 00:12:50.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.070 "is_configured": false, 00:12:50.071 "data_offset": 0, 00:12:50.071 "data_size": 63488 00:12:50.071 }, 00:12:50.071 { 00:12:50.071 "name": "BaseBdev3", 00:12:50.071 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:50.071 "is_configured": true, 00:12:50.071 "data_offset": 2048, 00:12:50.071 "data_size": 63488 00:12:50.071 }, 00:12:50.071 { 00:12:50.071 "name": "BaseBdev4", 00:12:50.071 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:50.071 "is_configured": true, 00:12:50.071 "data_offset": 2048, 00:12:50.071 "data_size": 63488 00:12:50.071 } 00:12:50.071 ] 00:12:50.071 }' 00:12:50.071 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.071 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:50.071 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.071 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:50.071 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:50.071 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.071 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:12:50.071 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.071 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.071 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.071 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.071 13:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.071 13:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.071 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.332 "name": "raid_bdev1", 00:12:50.332 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:50.332 "strip_size_kb": 0, 00:12:50.332 "state": "online", 00:12:50.332 "raid_level": "raid1", 00:12:50.332 "superblock": true, 00:12:50.332 "num_base_bdevs": 4, 00:12:50.332 "num_base_bdevs_discovered": 3, 00:12:50.332 "num_base_bdevs_operational": 3, 00:12:50.332 "base_bdevs_list": [ 00:12:50.332 { 00:12:50.332 "name": "spare", 00:12:50.332 "uuid": "0688b2ba-ec0b-5e26-9cea-687ac16b65db", 00:12:50.332 "is_configured": true, 00:12:50.332 "data_offset": 2048, 00:12:50.332 "data_size": 63488 00:12:50.332 }, 00:12:50.332 { 00:12:50.332 "name": null, 00:12:50.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.332 "is_configured": false, 00:12:50.332 "data_offset": 0, 00:12:50.332 "data_size": 63488 00:12:50.332 }, 00:12:50.332 { 00:12:50.332 "name": "BaseBdev3", 00:12:50.332 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:50.332 "is_configured": true, 00:12:50.332 "data_offset": 2048, 00:12:50.332 "data_size": 
63488 00:12:50.332 }, 00:12:50.332 { 00:12:50.332 "name": "BaseBdev4", 00:12:50.332 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:50.332 "is_configured": true, 00:12:50.332 "data_offset": 2048, 00:12:50.332 "data_size": 63488 00:12:50.332 } 00:12:50.332 ] 00:12:50.332 }' 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.332 "name": "raid_bdev1", 00:12:50.332 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:50.332 "strip_size_kb": 0, 00:12:50.332 "state": "online", 00:12:50.332 "raid_level": "raid1", 00:12:50.332 "superblock": true, 00:12:50.332 "num_base_bdevs": 4, 00:12:50.332 "num_base_bdevs_discovered": 3, 00:12:50.332 "num_base_bdevs_operational": 3, 00:12:50.332 "base_bdevs_list": [ 00:12:50.332 { 00:12:50.332 "name": "spare", 00:12:50.332 "uuid": "0688b2ba-ec0b-5e26-9cea-687ac16b65db", 00:12:50.332 "is_configured": true, 00:12:50.332 "data_offset": 2048, 00:12:50.332 "data_size": 63488 00:12:50.332 }, 00:12:50.332 { 00:12:50.332 "name": null, 00:12:50.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.332 "is_configured": false, 00:12:50.332 "data_offset": 0, 00:12:50.332 "data_size": 63488 00:12:50.332 }, 00:12:50.332 { 00:12:50.332 "name": "BaseBdev3", 00:12:50.332 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:50.332 "is_configured": true, 00:12:50.332 "data_offset": 2048, 00:12:50.332 "data_size": 63488 00:12:50.332 }, 00:12:50.332 { 00:12:50.332 "name": "BaseBdev4", 00:12:50.332 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:50.332 "is_configured": true, 00:12:50.332 "data_offset": 2048, 00:12:50.332 "data_size": 63488 00:12:50.332 } 00:12:50.332 ] 00:12:50.332 }' 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.332 13:26:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.907 [2024-11-20 13:26:32.351184] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:50.907 [2024-11-20 13:26:32.351281] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.907 [2024-11-20 13:26:32.351430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.907 [2024-11-20 13:26:32.351550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.907 [2024-11-20 13:26:32.351640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- 
# nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:50.907 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:51.166 /dev/nbd0 00:12:51.166 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:51.166 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:51.166 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:51.166 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:51.166 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.166 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.166 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:51.166 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:51.166 13:26:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.166 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.166 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.166 1+0 records in 00:12:51.166 1+0 records out 00:12:51.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295418 s, 13.9 MB/s 00:12:51.166 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.166 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:51.166 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.166 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.166 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:51.167 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.167 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:51.167 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:51.426 /dev/nbd1 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- 
# (( i = 1 )) 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.426 1+0 records in 00:12:51.426 1+0 records out 00:12:51.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315995 s, 13.0 MB/s 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:51.426 13:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:51.426 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:51.426 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:12:51.426 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:51.426 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:51.426 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:51.426 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.426 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:51.685 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:51.685 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:51.685 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:51.685 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.685 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.685 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:51.685 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:51.685 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.685 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.685 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.944 [2024-11-20 13:26:33.568641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:51.944 [2024-11-20 13:26:33.568774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.944 [2024-11-20 13:26:33.568802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:51.944 [2024-11-20 13:26:33.568817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.944 [2024-11-20 13:26:33.571322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.944 [2024-11-20 
13:26:33.571367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:51.944 [2024-11-20 13:26:33.571472] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:51.944 [2024-11-20 13:26:33.571526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:51.944 [2024-11-20 13:26:33.571665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.944 [2024-11-20 13:26:33.571775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:51.944 spare 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.944 13:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.203 [2024-11-20 13:26:33.671683] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:52.203 [2024-11-20 13:26:33.671839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:52.203 [2024-11-20 13:26:33.672241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:12:52.203 [2024-11-20 13:26:33.672487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:52.203 [2024-11-20 13:26:33.672536] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:52.203 [2024-11-20 13:26:33.672767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.203 13:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.203 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- 
# verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:52.203 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.203 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.203 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.203 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.203 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.203 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.203 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.203 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.203 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.203 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.204 13:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.204 13:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.204 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.204 13:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.204 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.204 "name": "raid_bdev1", 00:12:52.204 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:52.204 "strip_size_kb": 0, 00:12:52.204 "state": "online", 00:12:52.204 "raid_level": "raid1", 00:12:52.204 "superblock": true, 00:12:52.204 "num_base_bdevs": 4, 00:12:52.204 "num_base_bdevs_discovered": 3, 00:12:52.204 
"num_base_bdevs_operational": 3, 00:12:52.204 "base_bdevs_list": [ 00:12:52.204 { 00:12:52.204 "name": "spare", 00:12:52.204 "uuid": "0688b2ba-ec0b-5e26-9cea-687ac16b65db", 00:12:52.204 "is_configured": true, 00:12:52.204 "data_offset": 2048, 00:12:52.204 "data_size": 63488 00:12:52.204 }, 00:12:52.204 { 00:12:52.204 "name": null, 00:12:52.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.204 "is_configured": false, 00:12:52.204 "data_offset": 2048, 00:12:52.204 "data_size": 63488 00:12:52.204 }, 00:12:52.204 { 00:12:52.204 "name": "BaseBdev3", 00:12:52.204 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:52.204 "is_configured": true, 00:12:52.204 "data_offset": 2048, 00:12:52.204 "data_size": 63488 00:12:52.204 }, 00:12:52.204 { 00:12:52.204 "name": "BaseBdev4", 00:12:52.204 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:52.204 "is_configured": true, 00:12:52.204 "data_offset": 2048, 00:12:52.204 "data_size": 63488 00:12:52.204 } 00:12:52.204 ] 00:12:52.204 }' 00:12:52.204 13:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.204 13:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.771 "name": "raid_bdev1", 00:12:52.771 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:52.771 "strip_size_kb": 0, 00:12:52.771 "state": "online", 00:12:52.771 "raid_level": "raid1", 00:12:52.771 "superblock": true, 00:12:52.771 "num_base_bdevs": 4, 00:12:52.771 "num_base_bdevs_discovered": 3, 00:12:52.771 "num_base_bdevs_operational": 3, 00:12:52.771 "base_bdevs_list": [ 00:12:52.771 { 00:12:52.771 "name": "spare", 00:12:52.771 "uuid": "0688b2ba-ec0b-5e26-9cea-687ac16b65db", 00:12:52.771 "is_configured": true, 00:12:52.771 "data_offset": 2048, 00:12:52.771 "data_size": 63488 00:12:52.771 }, 00:12:52.771 { 00:12:52.771 "name": null, 00:12:52.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.771 "is_configured": false, 00:12:52.771 "data_offset": 2048, 00:12:52.771 "data_size": 63488 00:12:52.771 }, 00:12:52.771 { 00:12:52.771 "name": "BaseBdev3", 00:12:52.771 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:52.771 "is_configured": true, 00:12:52.771 "data_offset": 2048, 00:12:52.771 "data_size": 63488 00:12:52.771 }, 00:12:52.771 { 00:12:52.771 "name": "BaseBdev4", 00:12:52.771 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:52.771 "is_configured": true, 00:12:52.771 "data_offset": 2048, 00:12:52.771 "data_size": 63488 00:12:52.771 } 00:12:52.771 ] 00:12:52.771 }' 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.771 13:26:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.771 [2024-11-20 13:26:34.355680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.771 13:26:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.771 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.771 "name": "raid_bdev1", 00:12:52.771 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:52.771 "strip_size_kb": 0, 00:12:52.771 "state": "online", 00:12:52.771 "raid_level": "raid1", 00:12:52.771 "superblock": true, 00:12:52.771 "num_base_bdevs": 4, 00:12:52.771 "num_base_bdevs_discovered": 2, 00:12:52.771 "num_base_bdevs_operational": 2, 00:12:52.771 "base_bdevs_list": [ 00:12:52.771 { 00:12:52.771 "name": null, 00:12:52.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.772 "is_configured": false, 00:12:52.772 "data_offset": 0, 00:12:52.772 "data_size": 63488 00:12:52.772 }, 00:12:52.772 { 00:12:52.772 "name": null, 00:12:52.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.772 "is_configured": false, 00:12:52.772 "data_offset": 2048, 00:12:52.772 "data_size": 63488 00:12:52.772 }, 
00:12:52.772 { 00:12:52.772 "name": "BaseBdev3", 00:12:52.772 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:52.772 "is_configured": true, 00:12:52.772 "data_offset": 2048, 00:12:52.772 "data_size": 63488 00:12:52.772 }, 00:12:52.772 { 00:12:52.772 "name": "BaseBdev4", 00:12:52.772 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:52.772 "is_configured": true, 00:12:52.772 "data_offset": 2048, 00:12:52.772 "data_size": 63488 00:12:52.772 } 00:12:52.772 ] 00:12:52.772 }' 00:12:52.772 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.772 13:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.339 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:53.339 13:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.339 13:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.339 [2024-11-20 13:26:34.862840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:53.339 [2024-11-20 13:26:34.863149] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:53.339 [2024-11-20 13:26:34.863227] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:53.339 [2024-11-20 13:26:34.863302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:53.339 [2024-11-20 13:26:34.867547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caebd0 00:12:53.339 13:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.339 13:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:53.339 [2024-11-20 13:26:34.869786] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:54.275 13:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.275 13:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.275 13:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.275 13:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.275 13:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.275 13:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.275 13:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.275 13:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.275 13:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.275 13:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.275 13:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.275 "name": "raid_bdev1", 00:12:54.275 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:54.275 "strip_size_kb": 0, 00:12:54.275 "state": "online", 00:12:54.275 "raid_level": "raid1", 
00:12:54.275 "superblock": true, 00:12:54.275 "num_base_bdevs": 4, 00:12:54.275 "num_base_bdevs_discovered": 3, 00:12:54.275 "num_base_bdevs_operational": 3, 00:12:54.275 "process": { 00:12:54.275 "type": "rebuild", 00:12:54.275 "target": "spare", 00:12:54.275 "progress": { 00:12:54.275 "blocks": 20480, 00:12:54.275 "percent": 32 00:12:54.275 } 00:12:54.275 }, 00:12:54.275 "base_bdevs_list": [ 00:12:54.275 { 00:12:54.275 "name": "spare", 00:12:54.275 "uuid": "0688b2ba-ec0b-5e26-9cea-687ac16b65db", 00:12:54.275 "is_configured": true, 00:12:54.275 "data_offset": 2048, 00:12:54.275 "data_size": 63488 00:12:54.275 }, 00:12:54.275 { 00:12:54.275 "name": null, 00:12:54.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.275 "is_configured": false, 00:12:54.275 "data_offset": 2048, 00:12:54.275 "data_size": 63488 00:12:54.275 }, 00:12:54.275 { 00:12:54.275 "name": "BaseBdev3", 00:12:54.275 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:54.275 "is_configured": true, 00:12:54.275 "data_offset": 2048, 00:12:54.275 "data_size": 63488 00:12:54.275 }, 00:12:54.275 { 00:12:54.275 "name": "BaseBdev4", 00:12:54.275 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:54.275 "is_configured": true, 00:12:54.275 "data_offset": 2048, 00:12:54.275 "data_size": 63488 00:12:54.275 } 00:12:54.275 ] 00:12:54.275 }' 00:12:54.275 13:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.534 13:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.534 13:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.534 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.534 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:54.534 13:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.535 [2024-11-20 13:26:36.017721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:54.535 [2024-11-20 13:26:36.075251] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:54.535 [2024-11-20 13:26:36.075338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.535 [2024-11-20 13:26:36.075355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:54.535 [2024-11-20 13:26:36.075364] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.535 "name": "raid_bdev1", 00:12:54.535 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:54.535 "strip_size_kb": 0, 00:12:54.535 "state": "online", 00:12:54.535 "raid_level": "raid1", 00:12:54.535 "superblock": true, 00:12:54.535 "num_base_bdevs": 4, 00:12:54.535 "num_base_bdevs_discovered": 2, 00:12:54.535 "num_base_bdevs_operational": 2, 00:12:54.535 "base_bdevs_list": [ 00:12:54.535 { 00:12:54.535 "name": null, 00:12:54.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.535 "is_configured": false, 00:12:54.535 "data_offset": 0, 00:12:54.535 "data_size": 63488 00:12:54.535 }, 00:12:54.535 { 00:12:54.535 "name": null, 00:12:54.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.535 "is_configured": false, 00:12:54.535 "data_offset": 2048, 00:12:54.535 "data_size": 63488 00:12:54.535 }, 00:12:54.535 { 00:12:54.535 "name": "BaseBdev3", 00:12:54.535 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:54.535 "is_configured": true, 00:12:54.535 "data_offset": 2048, 00:12:54.535 "data_size": 63488 00:12:54.535 }, 00:12:54.535 { 00:12:54.535 "name": "BaseBdev4", 00:12:54.535 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:54.535 "is_configured": true, 00:12:54.535 "data_offset": 2048, 00:12:54.535 "data_size": 63488 00:12:54.535 } 00:12:54.535 ] 00:12:54.535 }' 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:54.535 13:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.104 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:55.104 13:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.104 13:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.104 [2024-11-20 13:26:36.547127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:55.104 [2024-11-20 13:26:36.547261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.104 [2024-11-20 13:26:36.547319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:12:55.104 [2024-11-20 13:26:36.547356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.104 [2024-11-20 13:26:36.547916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.104 [2024-11-20 13:26:36.548002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:55.104 [2024-11-20 13:26:36.548151] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:55.104 [2024-11-20 13:26:36.548204] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:55.104 [2024-11-20 13:26:36.548253] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:55.104 [2024-11-20 13:26:36.548324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:55.104 [2024-11-20 13:26:36.552539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:12:55.104 spare 00:12:55.104 13:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.104 13:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:55.104 [2024-11-20 13:26:36.554745] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:56.060 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.060 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.060 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.060 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.060 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.060 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.060 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.060 13:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.060 13:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.060 13:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.060 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.060 "name": "raid_bdev1", 00:12:56.060 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:56.060 "strip_size_kb": 0, 00:12:56.060 "state": "online", 00:12:56.060 
"raid_level": "raid1", 00:12:56.060 "superblock": true, 00:12:56.060 "num_base_bdevs": 4, 00:12:56.060 "num_base_bdevs_discovered": 3, 00:12:56.060 "num_base_bdevs_operational": 3, 00:12:56.060 "process": { 00:12:56.060 "type": "rebuild", 00:12:56.060 "target": "spare", 00:12:56.060 "progress": { 00:12:56.060 "blocks": 20480, 00:12:56.060 "percent": 32 00:12:56.060 } 00:12:56.060 }, 00:12:56.060 "base_bdevs_list": [ 00:12:56.060 { 00:12:56.060 "name": "spare", 00:12:56.060 "uuid": "0688b2ba-ec0b-5e26-9cea-687ac16b65db", 00:12:56.060 "is_configured": true, 00:12:56.060 "data_offset": 2048, 00:12:56.060 "data_size": 63488 00:12:56.060 }, 00:12:56.060 { 00:12:56.060 "name": null, 00:12:56.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.060 "is_configured": false, 00:12:56.060 "data_offset": 2048, 00:12:56.060 "data_size": 63488 00:12:56.060 }, 00:12:56.060 { 00:12:56.060 "name": "BaseBdev3", 00:12:56.060 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:56.060 "is_configured": true, 00:12:56.060 "data_offset": 2048, 00:12:56.060 "data_size": 63488 00:12:56.060 }, 00:12:56.060 { 00:12:56.060 "name": "BaseBdev4", 00:12:56.060 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:56.060 "is_configured": true, 00:12:56.060 "data_offset": 2048, 00:12:56.060 "data_size": 63488 00:12:56.060 } 00:12:56.060 ] 00:12:56.061 }' 00:12:56.061 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.061 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.061 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.061 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.061 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:56.061 13:26:37 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.061 13:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.061 [2024-11-20 13:26:37.686698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.319 [2024-11-20 13:26:37.760165] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:56.319 [2024-11-20 13:26:37.760352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.319 [2024-11-20 13:26:37.760400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.319 [2024-11-20 13:26:37.760425] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:56.319 13:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.319 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:56.319 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.319 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.319 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.319 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.319 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:56.319 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.319 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.319 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.319 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.319 
13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.319 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.319 13:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.319 13:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.319 13:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.319 13:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.319 "name": "raid_bdev1", 00:12:56.319 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:56.319 "strip_size_kb": 0, 00:12:56.319 "state": "online", 00:12:56.319 "raid_level": "raid1", 00:12:56.319 "superblock": true, 00:12:56.319 "num_base_bdevs": 4, 00:12:56.319 "num_base_bdevs_discovered": 2, 00:12:56.319 "num_base_bdevs_operational": 2, 00:12:56.319 "base_bdevs_list": [ 00:12:56.319 { 00:12:56.319 "name": null, 00:12:56.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.319 "is_configured": false, 00:12:56.319 "data_offset": 0, 00:12:56.319 "data_size": 63488 00:12:56.319 }, 00:12:56.319 { 00:12:56.319 "name": null, 00:12:56.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.319 "is_configured": false, 00:12:56.319 "data_offset": 2048, 00:12:56.320 "data_size": 63488 00:12:56.320 }, 00:12:56.320 { 00:12:56.320 "name": "BaseBdev3", 00:12:56.320 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:56.320 "is_configured": true, 00:12:56.320 "data_offset": 2048, 00:12:56.320 "data_size": 63488 00:12:56.320 }, 00:12:56.320 { 00:12:56.320 "name": "BaseBdev4", 00:12:56.320 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:56.320 "is_configured": true, 00:12:56.320 "data_offset": 2048, 00:12:56.320 "data_size": 63488 00:12:56.320 } 00:12:56.320 ] 00:12:56.320 }' 00:12:56.320 13:26:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.320 13:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.578 13:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.578 13:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.578 13:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.578 13:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.578 13:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.578 13:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.578 13:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.578 13:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.578 13:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.836 13:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.836 13:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.836 "name": "raid_bdev1", 00:12:56.836 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:56.836 "strip_size_kb": 0, 00:12:56.836 "state": "online", 00:12:56.836 "raid_level": "raid1", 00:12:56.836 "superblock": true, 00:12:56.836 "num_base_bdevs": 4, 00:12:56.836 "num_base_bdevs_discovered": 2, 00:12:56.836 "num_base_bdevs_operational": 2, 00:12:56.836 "base_bdevs_list": [ 00:12:56.836 { 00:12:56.836 "name": null, 00:12:56.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.836 "is_configured": false, 00:12:56.836 "data_offset": 0, 00:12:56.836 "data_size": 63488 00:12:56.836 }, 00:12:56.836 
{ 00:12:56.836 "name": null, 00:12:56.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.836 "is_configured": false, 00:12:56.836 "data_offset": 2048, 00:12:56.836 "data_size": 63488 00:12:56.836 }, 00:12:56.836 { 00:12:56.836 "name": "BaseBdev3", 00:12:56.836 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:56.836 "is_configured": true, 00:12:56.836 "data_offset": 2048, 00:12:56.836 "data_size": 63488 00:12:56.836 }, 00:12:56.836 { 00:12:56.836 "name": "BaseBdev4", 00:12:56.836 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:56.836 "is_configured": true, 00:12:56.836 "data_offset": 2048, 00:12:56.836 "data_size": 63488 00:12:56.836 } 00:12:56.836 ] 00:12:56.836 }' 00:12:56.836 13:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.836 13:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.836 13:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.836 13:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.836 13:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:56.836 13:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.836 13:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.836 13:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.836 13:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:56.836 13:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.836 13:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.836 [2024-11-20 13:26:38.380060] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:56.836 [2024-11-20 13:26:38.380182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.836 [2024-11-20 13:26:38.380239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:56.836 [2024-11-20 13:26:38.380275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.836 [2024-11-20 13:26:38.380776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.836 [2024-11-20 13:26:38.380836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:56.836 [2024-11-20 13:26:38.380958] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:56.836 [2024-11-20 13:26:38.381014] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:56.836 [2024-11-20 13:26:38.381068] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:56.836 [2024-11-20 13:26:38.381081] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:56.836 BaseBdev1 00:12:56.836 13:26:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.836 13:26:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:57.776 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:57.776 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.776 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.776 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.776 13:26:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.776 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:57.776 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.776 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.776 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.776 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.776 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.776 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.776 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.776 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.776 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.776 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.776 "name": "raid_bdev1", 00:12:57.776 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:57.776 "strip_size_kb": 0, 00:12:57.776 "state": "online", 00:12:57.776 "raid_level": "raid1", 00:12:57.776 "superblock": true, 00:12:57.776 "num_base_bdevs": 4, 00:12:57.776 "num_base_bdevs_discovered": 2, 00:12:57.776 "num_base_bdevs_operational": 2, 00:12:57.776 "base_bdevs_list": [ 00:12:57.776 { 00:12:57.776 "name": null, 00:12:57.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.776 "is_configured": false, 00:12:57.776 "data_offset": 0, 00:12:57.776 "data_size": 63488 00:12:57.776 }, 00:12:57.776 { 00:12:57.776 "name": null, 00:12:57.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.776 
"is_configured": false, 00:12:57.776 "data_offset": 2048, 00:12:57.776 "data_size": 63488 00:12:57.776 }, 00:12:57.776 { 00:12:57.776 "name": "BaseBdev3", 00:12:57.776 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:57.776 "is_configured": true, 00:12:57.776 "data_offset": 2048, 00:12:57.776 "data_size": 63488 00:12:57.776 }, 00:12:57.776 { 00:12:57.776 "name": "BaseBdev4", 00:12:57.776 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:57.776 "is_configured": true, 00:12:57.776 "data_offset": 2048, 00:12:57.776 "data_size": 63488 00:12:57.776 } 00:12:57.776 ] 00:12:57.776 }' 00:12:57.777 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.777 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.345 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:58.345 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.345 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:58.345 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:58.345 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.345 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.345 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.345 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.345 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.345 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.345 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:58.345 "name": "raid_bdev1", 00:12:58.345 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:58.345 "strip_size_kb": 0, 00:12:58.345 "state": "online", 00:12:58.345 "raid_level": "raid1", 00:12:58.345 "superblock": true, 00:12:58.345 "num_base_bdevs": 4, 00:12:58.345 "num_base_bdevs_discovered": 2, 00:12:58.345 "num_base_bdevs_operational": 2, 00:12:58.345 "base_bdevs_list": [ 00:12:58.345 { 00:12:58.345 "name": null, 00:12:58.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.345 "is_configured": false, 00:12:58.345 "data_offset": 0, 00:12:58.346 "data_size": 63488 00:12:58.346 }, 00:12:58.346 { 00:12:58.346 "name": null, 00:12:58.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.346 "is_configured": false, 00:12:58.346 "data_offset": 2048, 00:12:58.346 "data_size": 63488 00:12:58.346 }, 00:12:58.346 { 00:12:58.346 "name": "BaseBdev3", 00:12:58.346 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:58.346 "is_configured": true, 00:12:58.346 "data_offset": 2048, 00:12:58.346 "data_size": 63488 00:12:58.346 }, 00:12:58.346 { 00:12:58.346 "name": "BaseBdev4", 00:12:58.346 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:58.346 "is_configured": true, 00:12:58.346 "data_offset": 2048, 00:12:58.346 "data_size": 63488 00:12:58.346 } 00:12:58.346 ] 00:12:58.346 }' 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.346 [2024-11-20 13:26:39.973387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.346 [2024-11-20 13:26:39.973620] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:58.346 [2024-11-20 13:26:39.973687] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:58.346 request: 00:12:58.346 { 00:12:58.346 "base_bdev": "BaseBdev1", 00:12:58.346 "raid_bdev": "raid_bdev1", 00:12:58.346 "method": "bdev_raid_add_base_bdev", 00:12:58.346 "req_id": 1 00:12:58.346 } 00:12:58.346 Got JSON-RPC error response 00:12:58.346 response: 00:12:58.346 { 00:12:58.346 "code": -22, 00:12:58.346 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:58.346 } 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.346 13:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:59.724 13:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:59.724 13:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.724 13:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.724 13:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.724 13:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.724 13:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.724 13:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.724 13:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.724 13:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.724 13:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.724 13:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.724 13:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.724 13:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.724 13:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:59.724 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.724 13:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.724 "name": "raid_bdev1", 00:12:59.724 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:59.724 "strip_size_kb": 0, 00:12:59.724 "state": "online", 00:12:59.724 "raid_level": "raid1", 00:12:59.724 "superblock": true, 00:12:59.724 "num_base_bdevs": 4, 00:12:59.725 "num_base_bdevs_discovered": 2, 00:12:59.725 "num_base_bdevs_operational": 2, 00:12:59.725 "base_bdevs_list": [ 00:12:59.725 { 00:12:59.725 "name": null, 00:12:59.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.725 "is_configured": false, 00:12:59.725 "data_offset": 0, 00:12:59.725 "data_size": 63488 00:12:59.725 }, 00:12:59.725 { 00:12:59.725 "name": null, 00:12:59.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.725 "is_configured": false, 00:12:59.725 "data_offset": 2048, 00:12:59.725 "data_size": 63488 00:12:59.725 }, 00:12:59.725 { 00:12:59.725 "name": "BaseBdev3", 00:12:59.725 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:59.725 "is_configured": true, 00:12:59.725 "data_offset": 2048, 00:12:59.725 "data_size": 63488 00:12:59.725 }, 00:12:59.725 { 00:12:59.725 "name": "BaseBdev4", 00:12:59.725 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:59.725 "is_configured": true, 00:12:59.725 "data_offset": 2048, 00:12:59.725 "data_size": 63488 00:12:59.725 } 00:12:59.725 ] 00:12:59.725 }' 00:12:59.725 13:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.725 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.984 13:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.984 13:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.984 13:26:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.984 13:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.984 13:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.984 13:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.984 13:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.984 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.984 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.984 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.984 13:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.984 "name": "raid_bdev1", 00:12:59.984 "uuid": "264a2dc6-2153-43cf-b9af-c6382c6a20fe", 00:12:59.984 "strip_size_kb": 0, 00:12:59.984 "state": "online", 00:12:59.984 "raid_level": "raid1", 00:12:59.984 "superblock": true, 00:12:59.984 "num_base_bdevs": 4, 00:12:59.984 "num_base_bdevs_discovered": 2, 00:12:59.984 "num_base_bdevs_operational": 2, 00:12:59.984 "base_bdevs_list": [ 00:12:59.984 { 00:12:59.984 "name": null, 00:12:59.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.984 "is_configured": false, 00:12:59.984 "data_offset": 0, 00:12:59.984 "data_size": 63488 00:12:59.984 }, 00:12:59.984 { 00:12:59.984 "name": null, 00:12:59.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.984 "is_configured": false, 00:12:59.984 "data_offset": 2048, 00:12:59.984 "data_size": 63488 00:12:59.984 }, 00:12:59.984 { 00:12:59.984 "name": "BaseBdev3", 00:12:59.984 "uuid": "2622d319-5da4-517f-a8bb-ac8918f25b4c", 00:12:59.984 "is_configured": true, 00:12:59.984 "data_offset": 2048, 00:12:59.984 "data_size": 63488 00:12:59.984 }, 
00:12:59.984 { 00:12:59.984 "name": "BaseBdev4", 00:12:59.984 "uuid": "5c976782-cfd9-5cf6-b22c-8cdf1d650ecb", 00:12:59.984 "is_configured": true, 00:12:59.984 "data_offset": 2048, 00:12:59.984 "data_size": 63488 00:12:59.984 } 00:12:59.985 ] 00:12:59.985 }' 00:12:59.985 13:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.985 13:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:59.985 13:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.244 13:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:00.244 13:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88327 00:13:00.244 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 88327 ']' 00:13:00.244 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 88327 00:13:00.244 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:00.244 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.244 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88327 00:13:00.244 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.244 killing process with pid 88327 00:13:00.244 Received shutdown signal, test time was about 60.000000 seconds 00:13:00.244 00:13:00.244 Latency(us) 00:13:00.244 [2024-11-20T13:26:41.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.244 [2024-11-20T13:26:41.912Z] =================================================================================================================== 00:13:00.244 [2024-11-20T13:26:41.912Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 
00:13:00.244 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.244 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88327' 00:13:00.244 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 88327 00:13:00.244 [2024-11-20 13:26:41.695938] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:00.244 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 88327 00:13:00.244 [2024-11-20 13:26:41.696094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.244 [2024-11-20 13:26:41.696171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.244 [2024-11-20 13:26:41.696188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:13:00.244 [2024-11-20 13:26:41.750255] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:00.503 13:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:00.503 00:13:00.503 real 0m25.300s 00:13:00.503 user 0m30.706s 00:13:00.503 sys 0m4.214s 00:13:00.503 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.503 13:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.503 ************************************ 00:13:00.503 END TEST raid_rebuild_test_sb 00:13:00.503 ************************************ 00:13:00.503 13:26:42 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:00.503 13:26:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:00.503 13:26:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.503 13:26:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:00.503 ************************************ 00:13:00.503 START TEST raid_rebuild_test_io 00:13:00.503 ************************************ 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89085 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89085 00:13:00.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 89085 ']' 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.503 13:26:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.503 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:00.503 Zero copy mechanism will not be used. 00:13:00.503 [2024-11-20 13:26:42.144317] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:13:00.503 [2024-11-20 13:26:42.144479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89085 ] 00:13:00.762 [2024-11-20 13:26:42.297514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.762 [2024-11-20 13:26:42.328206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.762 [2024-11-20 13:26:42.373471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.762 [2024-11-20 13:26:42.373507] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.699 BaseBdev1_malloc 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.699 [2024-11-20 13:26:43.061533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:01.699 [2024-11-20 13:26:43.061645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.699 [2024-11-20 13:26:43.061690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:01.699 [2024-11-20 13:26:43.061722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.699 [2024-11-20 13:26:43.064021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.699 [2024-11-20 13:26:43.064062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:01.699 BaseBdev1 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.699 BaseBdev2_malloc 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.699 [2024-11-20 13:26:43.090682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:01.699 [2024-11-20 13:26:43.090804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.699 [2024-11-20 13:26:43.090849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:01.699 [2024-11-20 13:26:43.090891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.699 [2024-11-20 13:26:43.093265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.699 [2024-11-20 13:26:43.093349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:01.699 BaseBdev2 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:13:01.699 BaseBdev3_malloc 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.699 [2024-11-20 13:26:43.120040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:01.699 [2024-11-20 13:26:43.120181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.699 [2024-11-20 13:26:43.120240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:01.699 [2024-11-20 13:26:43.120254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.699 [2024-11-20 13:26:43.122855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.699 [2024-11-20 13:26:43.122900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:01.699 BaseBdev3 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.699 BaseBdev4_malloc 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.699 13:26:43 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.700 [2024-11-20 13:26:43.159020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:01.700 [2024-11-20 13:26:43.159146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.700 [2024-11-20 13:26:43.159209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:01.700 [2024-11-20 13:26:43.159248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.700 [2024-11-20 13:26:43.161738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.700 [2024-11-20 13:26:43.161826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:01.700 BaseBdev4 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.700 spare_malloc 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:01.700 spare_delay 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.700 [2024-11-20 13:26:43.200723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:01.700 [2024-11-20 13:26:43.200836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.700 [2024-11-20 13:26:43.200879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:01.700 [2024-11-20 13:26:43.200908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.700 [2024-11-20 13:26:43.203249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.700 [2024-11-20 13:26:43.203322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:01.700 spare 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.700 [2024-11-20 13:26:43.212781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.700 [2024-11-20 13:26:43.214832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:01.700 [2024-11-20 13:26:43.214942] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:01.700 [2024-11-20 13:26:43.215025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:01.700 [2024-11-20 13:26:43.215134] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:01.700 [2024-11-20 13:26:43.215175] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:01.700 [2024-11-20 13:26:43.215458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:01.700 [2024-11-20 13:26:43.215669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:01.700 [2024-11-20 13:26:43.215726] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:01.700 [2024-11-20 13:26:43.215923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.700 
13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.700 "name": "raid_bdev1", 00:13:01.700 "uuid": "a349cb07-9993-48a4-a78d-cae23b7c0f44", 00:13:01.700 "strip_size_kb": 0, 00:13:01.700 "state": "online", 00:13:01.700 "raid_level": "raid1", 00:13:01.700 "superblock": false, 00:13:01.700 "num_base_bdevs": 4, 00:13:01.700 "num_base_bdevs_discovered": 4, 00:13:01.700 "num_base_bdevs_operational": 4, 00:13:01.700 "base_bdevs_list": [ 00:13:01.700 { 00:13:01.700 "name": "BaseBdev1", 00:13:01.700 "uuid": "bd34a9e7-a0be-55f9-8cb1-f761d7b9032b", 00:13:01.700 "is_configured": true, 00:13:01.700 "data_offset": 0, 00:13:01.700 "data_size": 65536 00:13:01.700 }, 00:13:01.700 { 00:13:01.700 "name": "BaseBdev2", 00:13:01.700 "uuid": "d9375a01-a7da-5f60-b0ab-8114753f44b6", 00:13:01.700 "is_configured": true, 00:13:01.700 "data_offset": 0, 00:13:01.700 "data_size": 65536 00:13:01.700 }, 00:13:01.700 { 00:13:01.700 "name": "BaseBdev3", 00:13:01.700 "uuid": "79298ae7-1683-5461-80e3-2a0e62b6f701", 00:13:01.700 "is_configured": true, 00:13:01.700 "data_offset": 0, 00:13:01.700 "data_size": 65536 00:13:01.700 }, 00:13:01.700 { 00:13:01.700 "name": "BaseBdev4", 00:13:01.700 "uuid": 
"d7a8f8d0-1927-5673-984f-73133013a99a", 00:13:01.700 "is_configured": true, 00:13:01.700 "data_offset": 0, 00:13:01.700 "data_size": 65536 00:13:01.700 } 00:13:01.700 ] 00:13:01.700 }' 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.700 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.270 [2024-11-20 13:26:43.672397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.270 [2024-11-20 13:26:43.775815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.270 13:26:43 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.270 "name": "raid_bdev1", 00:13:02.270 "uuid": "a349cb07-9993-48a4-a78d-cae23b7c0f44", 00:13:02.270 "strip_size_kb": 0, 00:13:02.270 "state": "online", 00:13:02.270 "raid_level": "raid1", 00:13:02.270 "superblock": false, 00:13:02.270 "num_base_bdevs": 4, 00:13:02.270 "num_base_bdevs_discovered": 3, 00:13:02.270 "num_base_bdevs_operational": 3, 00:13:02.270 "base_bdevs_list": [ 00:13:02.270 { 00:13:02.270 "name": null, 00:13:02.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.270 "is_configured": false, 00:13:02.270 "data_offset": 0, 00:13:02.270 "data_size": 65536 00:13:02.270 }, 00:13:02.270 { 00:13:02.270 "name": "BaseBdev2", 00:13:02.270 "uuid": "d9375a01-a7da-5f60-b0ab-8114753f44b6", 00:13:02.270 "is_configured": true, 00:13:02.270 "data_offset": 0, 00:13:02.270 "data_size": 65536 00:13:02.270 }, 00:13:02.270 { 00:13:02.270 "name": "BaseBdev3", 00:13:02.270 "uuid": "79298ae7-1683-5461-80e3-2a0e62b6f701", 00:13:02.270 "is_configured": true, 00:13:02.270 "data_offset": 0, 00:13:02.270 "data_size": 65536 00:13:02.270 }, 00:13:02.270 { 00:13:02.270 "name": "BaseBdev4", 00:13:02.270 "uuid": "d7a8f8d0-1927-5673-984f-73133013a99a", 00:13:02.270 "is_configured": true, 00:13:02.270 "data_offset": 0, 00:13:02.270 "data_size": 65536 00:13:02.270 } 00:13:02.270 ] 00:13:02.270 }' 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.270 13:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.270 [2024-11-20 13:26:43.865860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002870 00:13:02.270 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:02.270 Zero copy mechanism will not be used. 00:13:02.270 Running I/O for 60 seconds... 00:13:02.837 13:26:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:02.837 13:26:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.837 13:26:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.838 [2024-11-20 13:26:44.249376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:02.838 13:26:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.838 13:26:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:02.838 [2024-11-20 13:26:44.307010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:13:02.838 [2024-11-20 13:26:44.309492] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.838 [2024-11-20 13:26:44.426239] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:02.838 [2024-11-20 13:26:44.427696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:03.097 [2024-11-20 13:26:44.640548] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:03.097 [2024-11-20 13:26:44.640979] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:03.356 147.00 IOPS, 441.00 MiB/s [2024-11-20T13:26:45.024Z] [2024-11-20 13:26:44.885018] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:03.356 [2024-11-20 
13:26:44.885610] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:03.615 [2024-11-20 13:26:45.095375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:03.615 [2024-11-20 13:26:45.095728] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.874 "name": "raid_bdev1", 00:13:03.874 "uuid": "a349cb07-9993-48a4-a78d-cae23b7c0f44", 00:13:03.874 "strip_size_kb": 0, 00:13:03.874 "state": "online", 00:13:03.874 "raid_level": "raid1", 00:13:03.874 "superblock": false, 00:13:03.874 "num_base_bdevs": 4, 00:13:03.874 "num_base_bdevs_discovered": 4, 00:13:03.874 
"num_base_bdevs_operational": 4, 00:13:03.874 "process": { 00:13:03.874 "type": "rebuild", 00:13:03.874 "target": "spare", 00:13:03.874 "progress": { 00:13:03.874 "blocks": 10240, 00:13:03.874 "percent": 15 00:13:03.874 } 00:13:03.874 }, 00:13:03.874 "base_bdevs_list": [ 00:13:03.874 { 00:13:03.874 "name": "spare", 00:13:03.874 "uuid": "7442670a-f59d-50c5-8ec8-2d15527ff6af", 00:13:03.874 "is_configured": true, 00:13:03.874 "data_offset": 0, 00:13:03.874 "data_size": 65536 00:13:03.874 }, 00:13:03.874 { 00:13:03.874 "name": "BaseBdev2", 00:13:03.874 "uuid": "d9375a01-a7da-5f60-b0ab-8114753f44b6", 00:13:03.874 "is_configured": true, 00:13:03.874 "data_offset": 0, 00:13:03.874 "data_size": 65536 00:13:03.874 }, 00:13:03.874 { 00:13:03.874 "name": "BaseBdev3", 00:13:03.874 "uuid": "79298ae7-1683-5461-80e3-2a0e62b6f701", 00:13:03.874 "is_configured": true, 00:13:03.874 "data_offset": 0, 00:13:03.874 "data_size": 65536 00:13:03.874 }, 00:13:03.874 { 00:13:03.874 "name": "BaseBdev4", 00:13:03.874 "uuid": "d7a8f8d0-1927-5673-984f-73133013a99a", 00:13:03.874 "is_configured": true, 00:13:03.874 "data_offset": 0, 00:13:03.874 "data_size": 65536 00:13:03.874 } 00:13:03.874 ] 00:13:03.874 }' 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.874 13:26:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.874 
[2024-11-20 13:26:45.445811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:03.874 [2024-11-20 13:26:45.455113] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:04.134 [2024-11-20 13:26:45.563587] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:04.134 [2024-11-20 13:26:45.581941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.134 [2024-11-20 13:26:45.582016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.134 [2024-11-20 13:26:45.582036] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:04.134 [2024-11-20 13:26:45.594732] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.134 "name": "raid_bdev1", 00:13:04.134 "uuid": "a349cb07-9993-48a4-a78d-cae23b7c0f44", 00:13:04.134 "strip_size_kb": 0, 00:13:04.134 "state": "online", 00:13:04.134 "raid_level": "raid1", 00:13:04.134 "superblock": false, 00:13:04.134 "num_base_bdevs": 4, 00:13:04.134 "num_base_bdevs_discovered": 3, 00:13:04.134 "num_base_bdevs_operational": 3, 00:13:04.134 "base_bdevs_list": [ 00:13:04.134 { 00:13:04.134 "name": null, 00:13:04.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.134 "is_configured": false, 00:13:04.134 "data_offset": 0, 00:13:04.134 "data_size": 65536 00:13:04.134 }, 00:13:04.134 { 00:13:04.134 "name": "BaseBdev2", 00:13:04.134 "uuid": "d9375a01-a7da-5f60-b0ab-8114753f44b6", 00:13:04.134 "is_configured": true, 00:13:04.134 "data_offset": 0, 00:13:04.134 "data_size": 65536 00:13:04.134 }, 00:13:04.134 { 00:13:04.134 "name": "BaseBdev3", 00:13:04.134 "uuid": "79298ae7-1683-5461-80e3-2a0e62b6f701", 00:13:04.134 "is_configured": true, 00:13:04.134 "data_offset": 0, 00:13:04.134 "data_size": 65536 00:13:04.134 }, 00:13:04.134 { 00:13:04.134 "name": "BaseBdev4", 00:13:04.134 "uuid": "d7a8f8d0-1927-5673-984f-73133013a99a", 00:13:04.134 "is_configured": true, 00:13:04.134 
"data_offset": 0, 00:13:04.134 "data_size": 65536 00:13:04.134 } 00:13:04.134 ] 00:13:04.134 }' 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.134 13:26:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.653 121.00 IOPS, 363.00 MiB/s [2024-11-20T13:26:46.321Z] 13:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:04.653 13:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.653 13:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:04.653 13:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:04.653 13:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.653 13:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.653 13:26:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.653 13:26:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.653 13:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.653 13:26:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.653 13:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.653 "name": "raid_bdev1", 00:13:04.653 "uuid": "a349cb07-9993-48a4-a78d-cae23b7c0f44", 00:13:04.654 "strip_size_kb": 0, 00:13:04.654 "state": "online", 00:13:04.654 "raid_level": "raid1", 00:13:04.654 "superblock": false, 00:13:04.654 "num_base_bdevs": 4, 00:13:04.654 "num_base_bdevs_discovered": 3, 00:13:04.654 "num_base_bdevs_operational": 3, 00:13:04.654 "base_bdevs_list": [ 00:13:04.654 { 00:13:04.654 "name": null, 00:13:04.654 
"uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.654 "is_configured": false, 00:13:04.654 "data_offset": 0, 00:13:04.654 "data_size": 65536 00:13:04.654 }, 00:13:04.654 { 00:13:04.654 "name": "BaseBdev2", 00:13:04.654 "uuid": "d9375a01-a7da-5f60-b0ab-8114753f44b6", 00:13:04.654 "is_configured": true, 00:13:04.654 "data_offset": 0, 00:13:04.654 "data_size": 65536 00:13:04.654 }, 00:13:04.654 { 00:13:04.654 "name": "BaseBdev3", 00:13:04.654 "uuid": "79298ae7-1683-5461-80e3-2a0e62b6f701", 00:13:04.654 "is_configured": true, 00:13:04.654 "data_offset": 0, 00:13:04.654 "data_size": 65536 00:13:04.654 }, 00:13:04.654 { 00:13:04.654 "name": "BaseBdev4", 00:13:04.654 "uuid": "d7a8f8d0-1927-5673-984f-73133013a99a", 00:13:04.654 "is_configured": true, 00:13:04.654 "data_offset": 0, 00:13:04.654 "data_size": 65536 00:13:04.654 } 00:13:04.654 ] 00:13:04.654 }' 00:13:04.654 13:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.654 13:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.654 13:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.654 13:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:04.654 13:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:04.654 13:26:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.654 13:26:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.654 [2024-11-20 13:26:46.193230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:04.654 13:26:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.654 13:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:04.654 [2024-11-20 
13:26:46.276642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:13:04.654 [2024-11-20 13:26:46.279402] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:04.913 [2024-11-20 13:26:46.418381] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:04.913 [2024-11-20 13:26:46.419100] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:05.173 [2024-11-20 13:26:46.640862] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:05.173 [2024-11-20 13:26:46.641727] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:05.432 135.67 IOPS, 407.00 MiB/s [2024-11-20T13:26:47.101Z] [2024-11-20 13:26:46.984045] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:05.691 [2024-11-20 13:26:47.201134] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:05.691 [2024-11-20 13:26:47.201466] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:05.691 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.691 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.691 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.691 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.691 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.691 13:26:47 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.691 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.691 13:26:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.691 13:26:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.691 13:26:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.691 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.691 "name": "raid_bdev1", 00:13:05.691 "uuid": "a349cb07-9993-48a4-a78d-cae23b7c0f44", 00:13:05.691 "strip_size_kb": 0, 00:13:05.691 "state": "online", 00:13:05.691 "raid_level": "raid1", 00:13:05.691 "superblock": false, 00:13:05.692 "num_base_bdevs": 4, 00:13:05.692 "num_base_bdevs_discovered": 4, 00:13:05.692 "num_base_bdevs_operational": 4, 00:13:05.692 "process": { 00:13:05.692 "type": "rebuild", 00:13:05.692 "target": "spare", 00:13:05.692 "progress": { 00:13:05.692 "blocks": 10240, 00:13:05.692 "percent": 15 00:13:05.692 } 00:13:05.692 }, 00:13:05.692 "base_bdevs_list": [ 00:13:05.692 { 00:13:05.692 "name": "spare", 00:13:05.692 "uuid": "7442670a-f59d-50c5-8ec8-2d15527ff6af", 00:13:05.692 "is_configured": true, 00:13:05.692 "data_offset": 0, 00:13:05.692 "data_size": 65536 00:13:05.692 }, 00:13:05.692 { 00:13:05.692 "name": "BaseBdev2", 00:13:05.692 "uuid": "d9375a01-a7da-5f60-b0ab-8114753f44b6", 00:13:05.692 "is_configured": true, 00:13:05.692 "data_offset": 0, 00:13:05.692 "data_size": 65536 00:13:05.692 }, 00:13:05.692 { 00:13:05.692 "name": "BaseBdev3", 00:13:05.692 "uuid": "79298ae7-1683-5461-80e3-2a0e62b6f701", 00:13:05.692 "is_configured": true, 00:13:05.692 "data_offset": 0, 00:13:05.692 "data_size": 65536 00:13:05.692 }, 00:13:05.692 { 00:13:05.692 "name": "BaseBdev4", 00:13:05.692 "uuid": 
"d7a8f8d0-1927-5673-984f-73133013a99a", 00:13:05.692 "is_configured": true, 00:13:05.692 "data_offset": 0, 00:13:05.692 "data_size": 65536 00:13:05.692 } 00:13:05.692 ] 00:13:05.692 }' 00:13:05.692 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.692 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.692 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.950 [2024-11-20 13:26:47.387332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:05.950 [2024-11-20 13:26:47.458298] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:13:05.950 [2024-11-20 13:26:47.458455] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:05.950 
13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.950 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.950 "name": "raid_bdev1", 00:13:05.950 "uuid": "a349cb07-9993-48a4-a78d-cae23b7c0f44", 00:13:05.950 "strip_size_kb": 0, 00:13:05.950 "state": "online", 00:13:05.950 "raid_level": "raid1", 00:13:05.950 "superblock": false, 00:13:05.950 "num_base_bdevs": 4, 00:13:05.950 "num_base_bdevs_discovered": 3, 00:13:05.950 "num_base_bdevs_operational": 3, 00:13:05.950 "process": { 00:13:05.950 "type": "rebuild", 00:13:05.950 "target": "spare", 00:13:05.950 "progress": { 00:13:05.950 "blocks": 12288, 00:13:05.950 "percent": 18 00:13:05.950 } 00:13:05.950 }, 00:13:05.950 "base_bdevs_list": [ 00:13:05.950 { 00:13:05.950 "name": "spare", 00:13:05.950 "uuid": 
"7442670a-f59d-50c5-8ec8-2d15527ff6af", 00:13:05.950 "is_configured": true, 00:13:05.950 "data_offset": 0, 00:13:05.950 "data_size": 65536 00:13:05.950 }, 00:13:05.950 { 00:13:05.950 "name": null, 00:13:05.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.950 "is_configured": false, 00:13:05.950 "data_offset": 0, 00:13:05.950 "data_size": 65536 00:13:05.950 }, 00:13:05.951 { 00:13:05.951 "name": "BaseBdev3", 00:13:05.951 "uuid": "79298ae7-1683-5461-80e3-2a0e62b6f701", 00:13:05.951 "is_configured": true, 00:13:05.951 "data_offset": 0, 00:13:05.951 "data_size": 65536 00:13:05.951 }, 00:13:05.951 { 00:13:05.951 "name": "BaseBdev4", 00:13:05.951 "uuid": "d7a8f8d0-1927-5673-984f-73133013a99a", 00:13:05.951 "is_configured": true, 00:13:05.951 "data_offset": 0, 00:13:05.951 "data_size": 65536 00:13:05.951 } 00:13:05.951 ] 00:13:05.951 }' 00:13:05.951 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.951 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.951 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.951 [2024-11-20 13:26:47.616146] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=396 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.210 "name": "raid_bdev1", 00:13:06.210 "uuid": "a349cb07-9993-48a4-a78d-cae23b7c0f44", 00:13:06.210 "strip_size_kb": 0, 00:13:06.210 "state": "online", 00:13:06.210 "raid_level": "raid1", 00:13:06.210 "superblock": false, 00:13:06.210 "num_base_bdevs": 4, 00:13:06.210 "num_base_bdevs_discovered": 3, 00:13:06.210 "num_base_bdevs_operational": 3, 00:13:06.210 "process": { 00:13:06.210 "type": "rebuild", 00:13:06.210 "target": "spare", 00:13:06.210 "progress": { 00:13:06.210 "blocks": 14336, 00:13:06.210 "percent": 21 00:13:06.210 } 00:13:06.210 }, 00:13:06.210 "base_bdevs_list": [ 00:13:06.210 { 00:13:06.210 "name": "spare", 00:13:06.210 "uuid": "7442670a-f59d-50c5-8ec8-2d15527ff6af", 00:13:06.210 "is_configured": true, 00:13:06.210 "data_offset": 0, 00:13:06.210 "data_size": 65536 00:13:06.210 }, 00:13:06.210 { 00:13:06.210 "name": null, 00:13:06.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.210 "is_configured": false, 00:13:06.210 "data_offset": 0, 00:13:06.210 "data_size": 65536 00:13:06.210 }, 00:13:06.210 { 00:13:06.210 "name": "BaseBdev3", 
00:13:06.210 "uuid": "79298ae7-1683-5461-80e3-2a0e62b6f701", 00:13:06.210 "is_configured": true, 00:13:06.210 "data_offset": 0, 00:13:06.210 "data_size": 65536 00:13:06.210 }, 00:13:06.210 { 00:13:06.210 "name": "BaseBdev4", 00:13:06.210 "uuid": "d7a8f8d0-1927-5673-984f-73133013a99a", 00:13:06.210 "is_configured": true, 00:13:06.210 "data_offset": 0, 00:13:06.210 "data_size": 65536 00:13:06.210 } 00:13:06.210 ] 00:13:06.210 }' 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.210 13:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:06.210 [2024-11-20 13:26:47.834986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:06.210 [2024-11-20 13:26:47.835666] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:06.727 116.50 IOPS, 349.50 MiB/s [2024-11-20T13:26:48.395Z] [2024-11-20 13:26:48.158918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:06.727 [2024-11-20 13:26:48.160263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:06.727 [2024-11-20 13:26:48.281568] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:06.986 [2024-11-20 13:26:48.604249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 
offset_end: 30720 00:13:07.244 13:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:07.244 13:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.244 13:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.244 13:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.244 13:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.244 13:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.244 13:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.244 13:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.244 13:26:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.244 13:26:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.244 13:26:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.244 13:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.244 "name": "raid_bdev1", 00:13:07.244 "uuid": "a349cb07-9993-48a4-a78d-cae23b7c0f44", 00:13:07.244 "strip_size_kb": 0, 00:13:07.244 "state": "online", 00:13:07.244 "raid_level": "raid1", 00:13:07.244 "superblock": false, 00:13:07.244 "num_base_bdevs": 4, 00:13:07.244 "num_base_bdevs_discovered": 3, 00:13:07.244 "num_base_bdevs_operational": 3, 00:13:07.244 "process": { 00:13:07.244 "type": "rebuild", 00:13:07.244 "target": "spare", 00:13:07.244 "progress": { 00:13:07.244 "blocks": 30720, 00:13:07.244 "percent": 46 00:13:07.244 } 00:13:07.244 }, 00:13:07.244 "base_bdevs_list": [ 00:13:07.244 { 00:13:07.244 "name": "spare", 00:13:07.244 "uuid": 
"7442670a-f59d-50c5-8ec8-2d15527ff6af", 00:13:07.244 "is_configured": true, 00:13:07.244 "data_offset": 0, 00:13:07.244 "data_size": 65536 00:13:07.244 }, 00:13:07.244 { 00:13:07.244 "name": null, 00:13:07.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.244 "is_configured": false, 00:13:07.244 "data_offset": 0, 00:13:07.244 "data_size": 65536 00:13:07.244 }, 00:13:07.244 { 00:13:07.244 "name": "BaseBdev3", 00:13:07.244 "uuid": "79298ae7-1683-5461-80e3-2a0e62b6f701", 00:13:07.244 "is_configured": true, 00:13:07.244 "data_offset": 0, 00:13:07.244 "data_size": 65536 00:13:07.244 }, 00:13:07.244 { 00:13:07.244 "name": "BaseBdev4", 00:13:07.244 "uuid": "d7a8f8d0-1927-5673-984f-73133013a99a", 00:13:07.244 "is_configured": true, 00:13:07.244 "data_offset": 0, 00:13:07.244 "data_size": 65536 00:13:07.244 } 00:13:07.244 ] 00:13:07.244 }' 00:13:07.244 13:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.244 103.60 IOPS, 310.80 MiB/s [2024-11-20T13:26:48.912Z] 13:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.244 13:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.502 13:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.502 13:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:07.502 [2024-11-20 13:26:48.947485] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:07.760 [2024-11-20 13:26:49.310414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:08.017 [2024-11-20 13:26:49.516373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:08.017 [2024-11-20 13:26:49.635424] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:08.535 93.33 IOPS, 280.00 MiB/s [2024-11-20T13:26:50.203Z] 13:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.535 13:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.535 13:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.535 13:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.535 13:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.535 13:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.535 13:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.535 13:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.535 13:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.535 13:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.535 13:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.535 13:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.535 "name": "raid_bdev1", 00:13:08.535 "uuid": "a349cb07-9993-48a4-a78d-cae23b7c0f44", 00:13:08.535 "strip_size_kb": 0, 00:13:08.535 "state": "online", 00:13:08.535 "raid_level": "raid1", 00:13:08.535 "superblock": false, 00:13:08.535 "num_base_bdevs": 4, 00:13:08.535 "num_base_bdevs_discovered": 3, 00:13:08.535 "num_base_bdevs_operational": 3, 00:13:08.535 "process": { 00:13:08.535 "type": "rebuild", 00:13:08.535 "target": "spare", 00:13:08.535 "progress": { 00:13:08.535 "blocks": 49152, 
00:13:08.535 "percent": 75 00:13:08.535 } 00:13:08.535 }, 00:13:08.535 "base_bdevs_list": [ 00:13:08.535 { 00:13:08.535 "name": "spare", 00:13:08.535 "uuid": "7442670a-f59d-50c5-8ec8-2d15527ff6af", 00:13:08.535 "is_configured": true, 00:13:08.535 "data_offset": 0, 00:13:08.535 "data_size": 65536 00:13:08.535 }, 00:13:08.535 { 00:13:08.535 "name": null, 00:13:08.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.535 "is_configured": false, 00:13:08.535 "data_offset": 0, 00:13:08.535 "data_size": 65536 00:13:08.535 }, 00:13:08.535 { 00:13:08.535 "name": "BaseBdev3", 00:13:08.535 "uuid": "79298ae7-1683-5461-80e3-2a0e62b6f701", 00:13:08.535 "is_configured": true, 00:13:08.535 "data_offset": 0, 00:13:08.535 "data_size": 65536 00:13:08.535 }, 00:13:08.535 { 00:13:08.535 "name": "BaseBdev4", 00:13:08.535 "uuid": "d7a8f8d0-1927-5673-984f-73133013a99a", 00:13:08.535 "is_configured": true, 00:13:08.535 "data_offset": 0, 00:13:08.535 "data_size": 65536 00:13:08.535 } 00:13:08.536 ] 00:13:08.536 }' 00:13:08.536 13:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.536 13:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.536 13:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.536 13:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.536 13:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:08.817 [2024-11-20 13:26:50.398645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:09.383 [2024-11-20 13:26:50.853316] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:09.383 85.00 IOPS, 255.00 MiB/s [2024-11-20T13:26:51.051Z] [2024-11-20 13:26:50.960325] bdev_raid.c:2562:raid_bdev_process_finish_done: 
*NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:09.383 [2024-11-20 13:26:50.963223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.643 "name": "raid_bdev1", 00:13:09.643 "uuid": "a349cb07-9993-48a4-a78d-cae23b7c0f44", 00:13:09.643 "strip_size_kb": 0, 00:13:09.643 "state": "online", 00:13:09.643 "raid_level": "raid1", 00:13:09.643 "superblock": false, 00:13:09.643 "num_base_bdevs": 4, 00:13:09.643 "num_base_bdevs_discovered": 3, 00:13:09.643 "num_base_bdevs_operational": 3, 00:13:09.643 "base_bdevs_list": [ 00:13:09.643 { 00:13:09.643 "name": "spare", 00:13:09.643 "uuid": "7442670a-f59d-50c5-8ec8-2d15527ff6af", 00:13:09.643 
"is_configured": true, 00:13:09.643 "data_offset": 0, 00:13:09.643 "data_size": 65536 00:13:09.643 }, 00:13:09.643 { 00:13:09.643 "name": null, 00:13:09.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.643 "is_configured": false, 00:13:09.643 "data_offset": 0, 00:13:09.643 "data_size": 65536 00:13:09.643 }, 00:13:09.643 { 00:13:09.643 "name": "BaseBdev3", 00:13:09.643 "uuid": "79298ae7-1683-5461-80e3-2a0e62b6f701", 00:13:09.643 "is_configured": true, 00:13:09.643 "data_offset": 0, 00:13:09.643 "data_size": 65536 00:13:09.643 }, 00:13:09.643 { 00:13:09.643 "name": "BaseBdev4", 00:13:09.643 "uuid": "d7a8f8d0-1927-5673-984f-73133013a99a", 00:13:09.643 "is_configured": true, 00:13:09.643 "data_offset": 0, 00:13:09.643 "data_size": 65536 00:13:09.643 } 00:13:09.643 ] 00:13:09.643 }' 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.643 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.643 "name": "raid_bdev1", 00:13:09.643 "uuid": "a349cb07-9993-48a4-a78d-cae23b7c0f44", 00:13:09.643 "strip_size_kb": 0, 00:13:09.643 "state": "online", 00:13:09.643 "raid_level": "raid1", 00:13:09.643 "superblock": false, 00:13:09.643 "num_base_bdevs": 4, 00:13:09.643 "num_base_bdevs_discovered": 3, 00:13:09.643 "num_base_bdevs_operational": 3, 00:13:09.643 "base_bdevs_list": [ 00:13:09.643 { 00:13:09.643 "name": "spare", 00:13:09.643 "uuid": "7442670a-f59d-50c5-8ec8-2d15527ff6af", 00:13:09.643 "is_configured": true, 00:13:09.643 "data_offset": 0, 00:13:09.643 "data_size": 65536 00:13:09.643 }, 00:13:09.643 { 00:13:09.643 "name": null, 00:13:09.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.643 "is_configured": false, 00:13:09.643 "data_offset": 0, 00:13:09.643 "data_size": 65536 00:13:09.643 }, 00:13:09.643 { 00:13:09.643 "name": "BaseBdev3", 00:13:09.643 "uuid": "79298ae7-1683-5461-80e3-2a0e62b6f701", 00:13:09.643 "is_configured": true, 00:13:09.643 "data_offset": 0, 00:13:09.643 "data_size": 65536 00:13:09.643 }, 00:13:09.643 { 00:13:09.643 "name": "BaseBdev4", 00:13:09.643 "uuid": "d7a8f8d0-1927-5673-984f-73133013a99a", 00:13:09.643 "is_configured": true, 00:13:09.643 "data_offset": 0, 00:13:09.643 "data_size": 65536 00:13:09.643 } 00:13:09.643 ] 00:13:09.643 }' 00:13:09.644 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.902 
13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.902 13:26:51 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.902 "name": "raid_bdev1", 00:13:09.902 "uuid": "a349cb07-9993-48a4-a78d-cae23b7c0f44", 00:13:09.902 "strip_size_kb": 0, 00:13:09.902 "state": "online", 00:13:09.902 "raid_level": "raid1", 00:13:09.902 "superblock": false, 00:13:09.902 "num_base_bdevs": 4, 00:13:09.902 "num_base_bdevs_discovered": 3, 00:13:09.902 "num_base_bdevs_operational": 3, 00:13:09.902 "base_bdevs_list": [ 00:13:09.902 { 00:13:09.902 "name": "spare", 00:13:09.902 "uuid": "7442670a-f59d-50c5-8ec8-2d15527ff6af", 00:13:09.902 "is_configured": true, 00:13:09.902 "data_offset": 0, 00:13:09.902 "data_size": 65536 00:13:09.902 }, 00:13:09.902 { 00:13:09.902 "name": null, 00:13:09.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.902 "is_configured": false, 00:13:09.902 "data_offset": 0, 00:13:09.902 "data_size": 65536 00:13:09.902 }, 00:13:09.902 { 00:13:09.902 "name": "BaseBdev3", 00:13:09.902 "uuid": "79298ae7-1683-5461-80e3-2a0e62b6f701", 00:13:09.902 "is_configured": true, 00:13:09.902 "data_offset": 0, 00:13:09.902 "data_size": 65536 00:13:09.902 }, 00:13:09.902 { 00:13:09.902 "name": "BaseBdev4", 00:13:09.902 "uuid": "d7a8f8d0-1927-5673-984f-73133013a99a", 00:13:09.902 "is_configured": true, 00:13:09.902 "data_offset": 0, 00:13:09.902 "data_size": 65536 00:13:09.902 } 00:13:09.902 ] 00:13:09.902 }' 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.902 13:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.161 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:10.161 13:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.161 13:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.161 [2024-11-20 13:26:51.779411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:13:10.161 [2024-11-20 13:26:51.779471] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:10.419 00:13:10.419 Latency(us) 00:13:10.419 [2024-11-20T13:26:52.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.419 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:10.419 raid_bdev1 : 8.00 78.89 236.68 0.00 0.00 16196.75 321.96 119052.30 00:13:10.419 [2024-11-20T13:26:52.087Z] =================================================================================================================== 00:13:10.419 [2024-11-20T13:26:52.087Z] Total : 78.89 236.68 0.00 0.00 16196.75 321.96 119052.30 00:13:10.419 [2024-11-20 13:26:51.856658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.419 [2024-11-20 13:26:51.856731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:10.419 [2024-11-20 13:26:51.856871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.419 [2024-11-20 13:26:51.856890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:10.419 { 00:13:10.419 "results": [ 00:13:10.419 { 00:13:10.420 "job": "raid_bdev1", 00:13:10.420 "core_mask": "0x1", 00:13:10.420 "workload": "randrw", 00:13:10.420 "percentage": 50, 00:13:10.420 "status": "finished", 00:13:10.420 "queue_depth": 2, 00:13:10.420 "io_size": 3145728, 00:13:10.420 "runtime": 7.99811, 00:13:10.420 "iops": 78.89363862212447, 00:13:10.420 "mibps": 236.68091586637343, 00:13:10.420 "io_failed": 0, 00:13:10.420 "io_timeout": 0, 00:13:10.420 "avg_latency_us": 16196.751977522335, 00:13:10.420 "min_latency_us": 321.95633187772927, 00:13:10.420 "max_latency_us": 119052.29694323144 00:13:10.420 } 00:13:10.420 ], 00:13:10.420 "core_count": 1 00:13:10.420 } 00:13:10.420 13:26:51 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.420 13:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk spare /dev/nbd0 00:13:10.678 /dev/nbd0 00:13:10.678 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:10.678 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:10.678 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:10.678 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:10.678 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:10.678 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:10.678 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:10.678 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.679 1+0 records in 00:13:10.679 1+0 records out 00:13:10.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437011 s, 9.4 MB/s 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@893 -- # return 0 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.679 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:10.937 /dev/nbd1 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.937 1+0 records in 00:13:10.937 1+0 records out 00:13:10.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333737 s, 12.3 MB/s 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.937 
13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.937 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.507 13:26:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:11.507 /dev/nbd1 00:13:11.507 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:11.507 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:11.507 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:11.507 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:11.507 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:11.507 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:11.507 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@877 -- # break 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.765 1+0 records in 00:13:11.765 1+0 records out 00:13:11.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353012 s, 11.6 MB/s 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@51 -- # local i 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.765 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:12.024 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:12.024 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:12.024 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:12.024 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.024 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.024 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:12.024 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:12.024 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.024 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:12.024 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.024 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:12.024 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.024 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:12.024 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.024 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89085 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 89085 ']' 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 89085 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89085 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.282 killing process with pid 89085 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89085' 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 89085 
00:13:12.282 Received shutdown signal, test time was about 9.938927 seconds 00:13:12.282 00:13:12.282 Latency(us) 00:13:12.282 [2024-11-20T13:26:53.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.282 [2024-11-20T13:26:53.950Z] =================================================================================================================== 00:13:12.282 [2024-11-20T13:26:53.950Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:12.282 13:26:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 89085 00:13:12.282 [2024-11-20 13:26:53.788239] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.282 [2024-11-20 13:26:53.837823] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:12.541 00:13:12.541 real 0m12.013s 00:13:12.541 user 0m15.864s 00:13:12.541 sys 0m1.763s 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.541 ************************************ 00:13:12.541 END TEST raid_rebuild_test_io 00:13:12.541 ************************************ 00:13:12.541 13:26:54 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:12.541 13:26:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:12.541 13:26:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.541 13:26:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:12.541 ************************************ 00:13:12.541 START TEST raid_rebuild_test_sb_io 00:13:12.541 ************************************ 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:13:12.541 
13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:12.541 13:26:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:12.541 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:12.542 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:12.542 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89483 00:13:12.542 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89483 00:13:12.542 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 89483 ']' 00:13:12.542 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.542 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:12.542 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.542 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:12.542 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.542 13:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.800 [2024-11-20 13:26:54.212970] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:13:12.800 [2024-11-20 13:26:54.213205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89483 ] 00:13:12.800 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:12.800 Zero copy mechanism will not be used. 
00:13:12.800 [2024-11-20 13:26:54.360584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.800 [2024-11-20 13:26:54.393072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.800 [2024-11-20 13:26:54.441812] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.800 [2024-11-20 13:26:54.441872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.739 BaseBdev1_malloc 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.739 [2024-11-20 13:26:55.227368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:13.739 [2024-11-20 13:26:55.227479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.739 [2024-11-20 13:26:55.227531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 
00:13:13.739 [2024-11-20 13:26:55.227570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.739 [2024-11-20 13:26:55.230473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.739 [2024-11-20 13:26:55.230565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:13.739 BaseBdev1 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.739 BaseBdev2_malloc 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.739 [2024-11-20 13:26:55.252020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:13.739 [2024-11-20 13:26:55.252102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.739 [2024-11-20 13:26:55.252132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:13.739 [2024-11-20 13:26:55.252143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.739 [2024-11-20 13:26:55.254851] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.739 [2024-11-20 13:26:55.254923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:13.739 BaseBdev2 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.739 BaseBdev3_malloc 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.739 [2024-11-20 13:26:55.273790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:13.739 [2024-11-20 13:26:55.273883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.739 [2024-11-20 13:26:55.273915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:13.739 [2024-11-20 13:26:55.273927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.739 [2024-11-20 13:26:55.276635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.739 [2024-11-20 13:26:55.276697] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev3 00:13:13.739 BaseBdev3 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.739 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.740 BaseBdev4_malloc 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.740 [2024-11-20 13:26:55.306726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:13.740 [2024-11-20 13:26:55.306813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.740 [2024-11-20 13:26:55.306848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:13.740 [2024-11-20 13:26:55.306862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.740 [2024-11-20 13:26:55.309985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.740 [2024-11-20 13:26:55.310068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:13.740 BaseBdev4 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.740 spare_malloc 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.740 spare_delay 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.740 [2024-11-20 13:26:55.336530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:13.740 [2024-11-20 13:26:55.336621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.740 [2024-11-20 13:26:55.336649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:13.740 [2024-11-20 13:26:55.336662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.740 [2024-11-20 13:26:55.339318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.740 [2024-11-20 13:26:55.339386] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:13.740 spare 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.740 [2024-11-20 13:26:55.344656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.740 [2024-11-20 13:26:55.346951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:13.740 [2024-11-20 13:26:55.347064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:13.740 [2024-11-20 13:26:55.347127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:13.740 [2024-11-20 13:26:55.347351] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:13.740 [2024-11-20 13:26:55.347374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:13.740 [2024-11-20 13:26:55.347743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:13.740 [2024-11-20 13:26:55.347939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:13.740 [2024-11-20 13:26:55.347963] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:13.740 [2024-11-20 13:26:55.348155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.740 "name": "raid_bdev1", 00:13:13.740 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:13.740 "strip_size_kb": 0, 00:13:13.740 "state": "online", 00:13:13.740 "raid_level": "raid1", 
00:13:13.740 "superblock": true, 00:13:13.740 "num_base_bdevs": 4, 00:13:13.740 "num_base_bdevs_discovered": 4, 00:13:13.740 "num_base_bdevs_operational": 4, 00:13:13.740 "base_bdevs_list": [ 00:13:13.740 { 00:13:13.740 "name": "BaseBdev1", 00:13:13.740 "uuid": "3d466766-46ab-5366-a943-9b89db05c26b", 00:13:13.740 "is_configured": true, 00:13:13.740 "data_offset": 2048, 00:13:13.740 "data_size": 63488 00:13:13.740 }, 00:13:13.740 { 00:13:13.740 "name": "BaseBdev2", 00:13:13.740 "uuid": "d80ac726-81b8-51c0-8503-ad570847fcb1", 00:13:13.740 "is_configured": true, 00:13:13.740 "data_offset": 2048, 00:13:13.740 "data_size": 63488 00:13:13.740 }, 00:13:13.740 { 00:13:13.740 "name": "BaseBdev3", 00:13:13.740 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:13.740 "is_configured": true, 00:13:13.740 "data_offset": 2048, 00:13:13.740 "data_size": 63488 00:13:13.740 }, 00:13:13.740 { 00:13:13.740 "name": "BaseBdev4", 00:13:13.740 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:13.740 "is_configured": true, 00:13:13.740 "data_offset": 2048, 00:13:13.740 "data_size": 63488 00:13:13.740 } 00:13:13.740 ] 00:13:13.740 }' 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.740 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:14.308 [2024-11-20 13:26:55.804745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.308 [2024-11-20 13:26:55.888176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.308 13:26:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.308 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.308 "name": "raid_bdev1", 00:13:14.308 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:14.308 "strip_size_kb": 0, 00:13:14.308 "state": "online", 00:13:14.308 "raid_level": "raid1", 00:13:14.308 "superblock": true, 00:13:14.308 "num_base_bdevs": 4, 00:13:14.308 "num_base_bdevs_discovered": 3, 00:13:14.308 "num_base_bdevs_operational": 3, 00:13:14.308 "base_bdevs_list": [ 00:13:14.308 { 00:13:14.308 "name": null, 00:13:14.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.308 "is_configured": false, 00:13:14.308 "data_offset": 0, 00:13:14.308 "data_size": 
63488 00:13:14.308 }, 00:13:14.308 { 00:13:14.308 "name": "BaseBdev2", 00:13:14.308 "uuid": "d80ac726-81b8-51c0-8503-ad570847fcb1", 00:13:14.308 "is_configured": true, 00:13:14.308 "data_offset": 2048, 00:13:14.308 "data_size": 63488 00:13:14.308 }, 00:13:14.308 { 00:13:14.308 "name": "BaseBdev3", 00:13:14.308 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:14.308 "is_configured": true, 00:13:14.308 "data_offset": 2048, 00:13:14.308 "data_size": 63488 00:13:14.308 }, 00:13:14.308 { 00:13:14.309 "name": "BaseBdev4", 00:13:14.309 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:14.309 "is_configured": true, 00:13:14.309 "data_offset": 2048, 00:13:14.309 "data_size": 63488 00:13:14.309 } 00:13:14.309 ] 00:13:14.309 }' 00:13:14.309 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.309 13:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.569 [2024-11-20 13:26:56.010424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:13:14.569 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:14.569 Zero copy mechanism will not be used. 00:13:14.569 Running I/O for 60 seconds... 
00:13:14.828 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:14.828 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.828 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.828 [2024-11-20 13:26:56.375114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:14.828 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.828 13:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:14.828 [2024-11-20 13:26:56.465622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:13:14.828 [2024-11-20 13:26:56.468140] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:15.087 [2024-11-20 13:26:56.587639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:15.087 [2024-11-20 13:26:56.588268] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:15.087 [2024-11-20 13:26:56.717249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:15.087 [2024-11-20 13:26:56.718188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:15.603 151.00 IOPS, 453.00 MiB/s [2024-11-20T13:26:57.271Z] [2024-11-20 13:26:57.056319] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:15.603 [2024-11-20 13:26:57.177226] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:15.863 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.863 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.863 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.863 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.863 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.863 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.863 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.863 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.863 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.863 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.863 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.863 "name": "raid_bdev1", 00:13:15.863 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:15.863 "strip_size_kb": 0, 00:13:15.863 "state": "online", 00:13:15.863 "raid_level": "raid1", 00:13:15.863 "superblock": true, 00:13:15.863 "num_base_bdevs": 4, 00:13:15.863 "num_base_bdevs_discovered": 4, 00:13:15.863 "num_base_bdevs_operational": 4, 00:13:15.863 "process": { 00:13:15.863 "type": "rebuild", 00:13:15.863 "target": "spare", 00:13:15.863 "progress": { 00:13:15.863 "blocks": 12288, 00:13:15.863 "percent": 19 00:13:15.863 } 00:13:15.863 }, 00:13:15.863 "base_bdevs_list": [ 00:13:15.863 { 00:13:15.863 "name": "spare", 00:13:15.863 "uuid": "f9346d60-704a-5522-8553-666343fde9f1", 00:13:15.863 "is_configured": true, 00:13:15.863 "data_offset": 2048, 00:13:15.863 "data_size": 63488 
00:13:15.863 }, 00:13:15.863 { 00:13:15.863 "name": "BaseBdev2", 00:13:15.863 "uuid": "d80ac726-81b8-51c0-8503-ad570847fcb1", 00:13:15.863 "is_configured": true, 00:13:15.863 "data_offset": 2048, 00:13:15.863 "data_size": 63488 00:13:15.863 }, 00:13:15.863 { 00:13:15.863 "name": "BaseBdev3", 00:13:15.863 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:15.863 "is_configured": true, 00:13:15.863 "data_offset": 2048, 00:13:15.863 "data_size": 63488 00:13:15.863 }, 00:13:15.863 { 00:13:15.863 "name": "BaseBdev4", 00:13:15.863 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:15.863 "is_configured": true, 00:13:15.863 "data_offset": 2048, 00:13:15.863 "data_size": 63488 00:13:15.863 } 00:13:15.863 ] 00:13:15.863 }' 00:13:15.863 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.863 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.863 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.123 [2024-11-20 13:26:57.530829] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:16.123 [2024-11-20 13:26:57.531629] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:16.123 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.123 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:16.123 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.123 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.123 [2024-11-20 13:26:57.561239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:16.123 [2024-11-20 
13:26:57.660918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:16.123 [2024-11-20 13:26:57.661824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:16.123 [2024-11-20 13:26:57.773114] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:16.123 [2024-11-20 13:26:57.786172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.123 [2024-11-20 13:26:57.786374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:16.123 [2024-11-20 13:26:57.786413] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:16.393 [2024-11-20 13:26:57.816521] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.393 "name": "raid_bdev1", 00:13:16.393 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:16.393 "strip_size_kb": 0, 00:13:16.393 "state": "online", 00:13:16.393 "raid_level": "raid1", 00:13:16.393 "superblock": true, 00:13:16.393 "num_base_bdevs": 4, 00:13:16.393 "num_base_bdevs_discovered": 3, 00:13:16.393 "num_base_bdevs_operational": 3, 00:13:16.393 "base_bdevs_list": [ 00:13:16.393 { 00:13:16.393 "name": null, 00:13:16.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.393 "is_configured": false, 00:13:16.393 "data_offset": 0, 00:13:16.393 "data_size": 63488 00:13:16.393 }, 00:13:16.393 { 00:13:16.393 "name": "BaseBdev2", 00:13:16.393 "uuid": "d80ac726-81b8-51c0-8503-ad570847fcb1", 00:13:16.393 "is_configured": true, 00:13:16.393 "data_offset": 2048, 00:13:16.393 "data_size": 63488 00:13:16.393 }, 00:13:16.393 { 00:13:16.393 "name": "BaseBdev3", 00:13:16.393 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:16.393 "is_configured": true, 00:13:16.393 "data_offset": 2048, 00:13:16.393 "data_size": 63488 00:13:16.393 }, 00:13:16.393 { 00:13:16.393 "name": "BaseBdev4", 
00:13:16.393 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:16.393 "is_configured": true, 00:13:16.393 "data_offset": 2048, 00:13:16.393 "data_size": 63488 00:13:16.393 } 00:13:16.393 ] 00:13:16.393 }' 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.393 13:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.699 124.50 IOPS, 373.50 MiB/s [2024-11-20T13:26:58.367Z] 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.699 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.699 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.699 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.699 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.699 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.699 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.699 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.699 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.699 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.700 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.700 "name": "raid_bdev1", 00:13:16.700 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:16.700 "strip_size_kb": 0, 00:13:16.700 "state": "online", 00:13:16.700 "raid_level": "raid1", 00:13:16.700 "superblock": true, 00:13:16.700 "num_base_bdevs": 4, 00:13:16.700 
"num_base_bdevs_discovered": 3, 00:13:16.700 "num_base_bdevs_operational": 3, 00:13:16.700 "base_bdevs_list": [ 00:13:16.700 { 00:13:16.700 "name": null, 00:13:16.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.700 "is_configured": false, 00:13:16.700 "data_offset": 0, 00:13:16.700 "data_size": 63488 00:13:16.700 }, 00:13:16.700 { 00:13:16.700 "name": "BaseBdev2", 00:13:16.700 "uuid": "d80ac726-81b8-51c0-8503-ad570847fcb1", 00:13:16.700 "is_configured": true, 00:13:16.700 "data_offset": 2048, 00:13:16.700 "data_size": 63488 00:13:16.700 }, 00:13:16.700 { 00:13:16.700 "name": "BaseBdev3", 00:13:16.700 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:16.700 "is_configured": true, 00:13:16.700 "data_offset": 2048, 00:13:16.700 "data_size": 63488 00:13:16.700 }, 00:13:16.700 { 00:13:16.700 "name": "BaseBdev4", 00:13:16.700 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:16.700 "is_configured": true, 00:13:16.700 "data_offset": 2048, 00:13:16.700 "data_size": 63488 00:13:16.700 } 00:13:16.700 ] 00:13:16.700 }' 00:13:16.700 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.960 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.960 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.960 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.960 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:16.960 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.960 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.960 [2024-11-20 13:26:58.461676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:16.960 13:26:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.960 13:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:16.960 [2024-11-20 13:26:58.535558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:13:16.960 [2024-11-20 13:26:58.538169] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:17.219 [2024-11-20 13:26:58.658033] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:17.219 [2024-11-20 13:26:58.658776] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:17.219 [2024-11-20 13:26:58.790228] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:17.219 [2024-11-20 13:26:58.791154] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:18.046 127.67 IOPS, 383.00 MiB/s [2024-11-20T13:26:59.714Z] 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.046 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.046 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.046 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.046 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.046 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.046 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.046 13:26:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.046 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.046 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.047 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.047 "name": "raid_bdev1", 00:13:18.047 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:18.047 "strip_size_kb": 0, 00:13:18.047 "state": "online", 00:13:18.047 "raid_level": "raid1", 00:13:18.047 "superblock": true, 00:13:18.047 "num_base_bdevs": 4, 00:13:18.047 "num_base_bdevs_discovered": 4, 00:13:18.047 "num_base_bdevs_operational": 4, 00:13:18.047 "process": { 00:13:18.047 "type": "rebuild", 00:13:18.047 "target": "spare", 00:13:18.047 "progress": { 00:13:18.047 "blocks": 12288, 00:13:18.047 "percent": 19 00:13:18.047 } 00:13:18.047 }, 00:13:18.047 "base_bdevs_list": [ 00:13:18.047 { 00:13:18.047 "name": "spare", 00:13:18.047 "uuid": "f9346d60-704a-5522-8553-666343fde9f1", 00:13:18.047 "is_configured": true, 00:13:18.047 "data_offset": 2048, 00:13:18.047 "data_size": 63488 00:13:18.047 }, 00:13:18.047 { 00:13:18.047 "name": "BaseBdev2", 00:13:18.047 "uuid": "d80ac726-81b8-51c0-8503-ad570847fcb1", 00:13:18.047 "is_configured": true, 00:13:18.047 "data_offset": 2048, 00:13:18.047 "data_size": 63488 00:13:18.047 }, 00:13:18.047 { 00:13:18.047 "name": "BaseBdev3", 00:13:18.047 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:18.047 "is_configured": true, 00:13:18.047 "data_offset": 2048, 00:13:18.047 "data_size": 63488 00:13:18.047 }, 00:13:18.047 { 00:13:18.047 "name": "BaseBdev4", 00:13:18.047 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:18.047 "is_configured": true, 00:13:18.047 "data_offset": 2048, 00:13:18.047 "data_size": 63488 00:13:18.047 } 00:13:18.047 ] 00:13:18.047 }' 00:13:18.047 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.047 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.047 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.047 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.047 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:18.047 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:18.047 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:18.047 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:18.047 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:18.047 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:18.047 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:18.047 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.047 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.047 [2024-11-20 13:26:59.668967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:18.047 [2024-11-20 13:26:59.671058] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:18.047 [2024-11-20 13:26:59.671976] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:18.306 [2024-11-20 13:26:59.874845] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:13:18.307 [2024-11-20 
13:26:59.875098] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:13:18.307 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.307 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:18.307 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:18.307 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.307 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.307 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.307 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.307 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.307 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.307 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.307 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.307 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.307 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.307 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.307 "name": "raid_bdev1", 00:13:18.307 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:18.307 "strip_size_kb": 0, 00:13:18.307 "state": "online", 00:13:18.307 "raid_level": "raid1", 00:13:18.307 "superblock": true, 00:13:18.307 "num_base_bdevs": 4, 00:13:18.307 "num_base_bdevs_discovered": 3, 
00:13:18.307 "num_base_bdevs_operational": 3, 00:13:18.307 "process": { 00:13:18.307 "type": "rebuild", 00:13:18.307 "target": "spare", 00:13:18.307 "progress": { 00:13:18.307 "blocks": 16384, 00:13:18.307 "percent": 25 00:13:18.307 } 00:13:18.307 }, 00:13:18.307 "base_bdevs_list": [ 00:13:18.307 { 00:13:18.307 "name": "spare", 00:13:18.307 "uuid": "f9346d60-704a-5522-8553-666343fde9f1", 00:13:18.307 "is_configured": true, 00:13:18.307 "data_offset": 2048, 00:13:18.307 "data_size": 63488 00:13:18.307 }, 00:13:18.307 { 00:13:18.307 "name": null, 00:13:18.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.307 "is_configured": false, 00:13:18.307 "data_offset": 0, 00:13:18.307 "data_size": 63488 00:13:18.307 }, 00:13:18.307 { 00:13:18.307 "name": "BaseBdev3", 00:13:18.307 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:18.307 "is_configured": true, 00:13:18.307 "data_offset": 2048, 00:13:18.307 "data_size": 63488 00:13:18.307 }, 00:13:18.307 { 00:13:18.307 "name": "BaseBdev4", 00:13:18.307 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:18.307 "is_configured": true, 00:13:18.307 "data_offset": 2048, 00:13:18.307 "data_size": 63488 00:13:18.307 } 00:13:18.307 ] 00:13:18.307 }' 00:13:18.307 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.595 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.595 13:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.595 117.75 IOPS, 353.25 MiB/s [2024-11-20T13:27:00.263Z] 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=409 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:18.595 13:27:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.595 "name": "raid_bdev1", 00:13:18.595 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:18.595 "strip_size_kb": 0, 00:13:18.595 "state": "online", 00:13:18.595 "raid_level": "raid1", 00:13:18.595 "superblock": true, 00:13:18.595 "num_base_bdevs": 4, 00:13:18.595 "num_base_bdevs_discovered": 3, 00:13:18.595 "num_base_bdevs_operational": 3, 00:13:18.595 "process": { 00:13:18.595 "type": "rebuild", 00:13:18.595 "target": "spare", 00:13:18.595 "progress": { 00:13:18.595 "blocks": 18432, 00:13:18.595 "percent": 29 00:13:18.595 } 00:13:18.595 }, 00:13:18.595 "base_bdevs_list": [ 00:13:18.595 { 00:13:18.595 "name": "spare", 00:13:18.595 "uuid": "f9346d60-704a-5522-8553-666343fde9f1", 00:13:18.595 "is_configured": true, 00:13:18.595 "data_offset": 2048, 
00:13:18.595 "data_size": 63488 00:13:18.595 }, 00:13:18.595 { 00:13:18.595 "name": null, 00:13:18.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.595 "is_configured": false, 00:13:18.595 "data_offset": 0, 00:13:18.595 "data_size": 63488 00:13:18.595 }, 00:13:18.595 { 00:13:18.595 "name": "BaseBdev3", 00:13:18.595 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:18.595 "is_configured": true, 00:13:18.595 "data_offset": 2048, 00:13:18.595 "data_size": 63488 00:13:18.595 }, 00:13:18.595 { 00:13:18.595 "name": "BaseBdev4", 00:13:18.595 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:18.595 "is_configured": true, 00:13:18.595 "data_offset": 2048, 00:13:18.595 "data_size": 63488 00:13:18.595 } 00:13:18.595 ] 00:13:18.595 }' 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.595 [2024-11-20 13:27:00.119453] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.595 13:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:18.854 [2024-11-20 13:27:00.324208] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:19.679 106.60 IOPS, 319.80 MiB/s [2024-11-20T13:27:01.347Z] [2024-11-20 13:27:01.172799] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:19.679 13:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:19.679 13:27:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.679 13:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.679 13:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.679 13:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.679 13:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.679 13:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.679 13:27:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.679 13:27:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.679 13:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.679 13:27:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.679 13:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.679 "name": "raid_bdev1", 00:13:19.679 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:19.679 "strip_size_kb": 0, 00:13:19.679 "state": "online", 00:13:19.679 "raid_level": "raid1", 00:13:19.679 "superblock": true, 00:13:19.679 "num_base_bdevs": 4, 00:13:19.679 "num_base_bdevs_discovered": 3, 00:13:19.679 "num_base_bdevs_operational": 3, 00:13:19.679 "process": { 00:13:19.679 "type": "rebuild", 00:13:19.679 "target": "spare", 00:13:19.679 "progress": { 00:13:19.679 "blocks": 38912, 00:13:19.679 "percent": 61 00:13:19.679 } 00:13:19.679 }, 00:13:19.679 "base_bdevs_list": [ 00:13:19.679 { 00:13:19.679 "name": "spare", 00:13:19.679 "uuid": "f9346d60-704a-5522-8553-666343fde9f1", 00:13:19.679 "is_configured": true, 00:13:19.679 "data_offset": 2048, 
00:13:19.679 "data_size": 63488 00:13:19.679 }, 00:13:19.679 { 00:13:19.679 "name": null, 00:13:19.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.679 "is_configured": false, 00:13:19.679 "data_offset": 0, 00:13:19.679 "data_size": 63488 00:13:19.679 }, 00:13:19.679 { 00:13:19.679 "name": "BaseBdev3", 00:13:19.679 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:19.679 "is_configured": true, 00:13:19.679 "data_offset": 2048, 00:13:19.679 "data_size": 63488 00:13:19.679 }, 00:13:19.679 { 00:13:19.679 "name": "BaseBdev4", 00:13:19.679 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:19.679 "is_configured": true, 00:13:19.679 "data_offset": 2048, 00:13:19.679 "data_size": 63488 00:13:19.679 } 00:13:19.679 ] 00:13:19.679 }' 00:13:19.680 13:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.680 13:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.680 13:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.680 13:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.680 13:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:20.244 [2024-11-20 13:27:01.824333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:20.502 [2024-11-20 13:27:01.936090] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:20.759 96.17 IOPS, 288.50 MiB/s [2024-11-20T13:27:02.427Z] 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:20.759 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.759 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.759 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.760 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.760 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.760 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.760 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.760 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.760 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.760 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.760 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.760 "name": "raid_bdev1", 00:13:20.760 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:20.760 "strip_size_kb": 0, 00:13:20.760 "state": "online", 00:13:20.760 "raid_level": "raid1", 00:13:20.760 "superblock": true, 00:13:20.760 "num_base_bdevs": 4, 00:13:20.760 "num_base_bdevs_discovered": 3, 00:13:20.760 "num_base_bdevs_operational": 3, 00:13:20.760 "process": { 00:13:20.760 "type": "rebuild", 00:13:20.760 "target": "spare", 00:13:20.760 "progress": { 00:13:20.760 "blocks": 59392, 00:13:20.760 "percent": 93 00:13:20.760 } 00:13:20.760 }, 00:13:20.760 "base_bdevs_list": [ 00:13:20.760 { 00:13:20.760 "name": "spare", 00:13:20.760 "uuid": "f9346d60-704a-5522-8553-666343fde9f1", 00:13:20.760 "is_configured": true, 00:13:20.760 "data_offset": 2048, 00:13:20.760 "data_size": 63488 00:13:20.760 }, 00:13:20.760 { 00:13:20.760 "name": null, 00:13:20.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.760 
"is_configured": false, 00:13:20.760 "data_offset": 0, 00:13:20.760 "data_size": 63488 00:13:20.760 }, 00:13:20.760 { 00:13:20.760 "name": "BaseBdev3", 00:13:20.760 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:20.760 "is_configured": true, 00:13:20.760 "data_offset": 2048, 00:13:20.760 "data_size": 63488 00:13:20.760 }, 00:13:20.760 { 00:13:20.760 "name": "BaseBdev4", 00:13:20.760 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:20.760 "is_configured": true, 00:13:20.760 "data_offset": 2048, 00:13:20.760 "data_size": 63488 00:13:20.760 } 00:13:20.760 ] 00:13:20.760 }' 00:13:20.760 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.018 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.018 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.018 [2024-11-20 13:27:02.483970] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:21.018 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.018 13:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.018 [2024-11-20 13:27:02.583677] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:21.018 [2024-11-20 13:27:02.587111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.844 87.86 IOPS, 263.57 MiB/s [2024-11-20T13:27:03.512Z] 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.844 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.844 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.844 13:27:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.844 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.844 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.844 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.844 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.844 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.844 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.844 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.101 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.101 "name": "raid_bdev1", 00:13:22.101 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:22.101 "strip_size_kb": 0, 00:13:22.101 "state": "online", 00:13:22.101 "raid_level": "raid1", 00:13:22.101 "superblock": true, 00:13:22.101 "num_base_bdevs": 4, 00:13:22.101 "num_base_bdevs_discovered": 3, 00:13:22.101 "num_base_bdevs_operational": 3, 00:13:22.101 "base_bdevs_list": [ 00:13:22.101 { 00:13:22.101 "name": "spare", 00:13:22.101 "uuid": "f9346d60-704a-5522-8553-666343fde9f1", 00:13:22.101 "is_configured": true, 00:13:22.101 "data_offset": 2048, 00:13:22.101 "data_size": 63488 00:13:22.101 }, 00:13:22.101 { 00:13:22.101 "name": null, 00:13:22.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.101 "is_configured": false, 00:13:22.101 "data_offset": 0, 00:13:22.101 "data_size": 63488 00:13:22.101 }, 00:13:22.101 { 00:13:22.101 "name": "BaseBdev3", 00:13:22.101 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:22.101 "is_configured": true, 00:13:22.101 "data_offset": 2048, 00:13:22.101 
"data_size": 63488 00:13:22.101 }, 00:13:22.101 { 00:13:22.101 "name": "BaseBdev4", 00:13:22.101 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:22.101 "is_configured": true, 00:13:22.101 "data_offset": 2048, 00:13:22.101 "data_size": 63488 00:13:22.101 } 00:13:22.101 ] 00:13:22.101 }' 00:13:22.101 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.101 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:22.101 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.101 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:22.101 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:22.101 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:22.101 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.101 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:22.101 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:22.101 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.101 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.101 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.101 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.101 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.101 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.101 
13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.101 "name": "raid_bdev1", 00:13:22.101 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:22.101 "strip_size_kb": 0, 00:13:22.101 "state": "online", 00:13:22.101 "raid_level": "raid1", 00:13:22.101 "superblock": true, 00:13:22.101 "num_base_bdevs": 4, 00:13:22.101 "num_base_bdevs_discovered": 3, 00:13:22.101 "num_base_bdevs_operational": 3, 00:13:22.101 "base_bdevs_list": [ 00:13:22.101 { 00:13:22.101 "name": "spare", 00:13:22.101 "uuid": "f9346d60-704a-5522-8553-666343fde9f1", 00:13:22.101 "is_configured": true, 00:13:22.101 "data_offset": 2048, 00:13:22.101 "data_size": 63488 00:13:22.101 }, 00:13:22.101 { 00:13:22.101 "name": null, 00:13:22.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.101 "is_configured": false, 00:13:22.101 "data_offset": 0, 00:13:22.101 "data_size": 63488 00:13:22.101 }, 00:13:22.101 { 00:13:22.101 "name": "BaseBdev3", 00:13:22.101 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:22.101 "is_configured": true, 00:13:22.101 "data_offset": 2048, 00:13:22.101 "data_size": 63488 00:13:22.101 }, 00:13:22.101 { 00:13:22.101 "name": "BaseBdev4", 00:13:22.102 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:22.102 "is_configured": true, 00:13:22.102 "data_offset": 2048, 00:13:22.102 "data_size": 63488 00:13:22.102 } 00:13:22.102 ] 00:13:22.102 }' 00:13:22.102 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.102 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:22.102 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 
3 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.359 "name": "raid_bdev1", 00:13:22.359 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:22.359 "strip_size_kb": 0, 00:13:22.359 "state": "online", 00:13:22.359 "raid_level": "raid1", 00:13:22.359 "superblock": true, 00:13:22.359 "num_base_bdevs": 4, 00:13:22.359 "num_base_bdevs_discovered": 3, 00:13:22.359 
"num_base_bdevs_operational": 3, 00:13:22.359 "base_bdevs_list": [ 00:13:22.359 { 00:13:22.359 "name": "spare", 00:13:22.359 "uuid": "f9346d60-704a-5522-8553-666343fde9f1", 00:13:22.359 "is_configured": true, 00:13:22.359 "data_offset": 2048, 00:13:22.359 "data_size": 63488 00:13:22.359 }, 00:13:22.359 { 00:13:22.359 "name": null, 00:13:22.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.359 "is_configured": false, 00:13:22.359 "data_offset": 0, 00:13:22.359 "data_size": 63488 00:13:22.359 }, 00:13:22.359 { 00:13:22.359 "name": "BaseBdev3", 00:13:22.359 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:22.359 "is_configured": true, 00:13:22.359 "data_offset": 2048, 00:13:22.359 "data_size": 63488 00:13:22.359 }, 00:13:22.359 { 00:13:22.359 "name": "BaseBdev4", 00:13:22.359 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:22.359 "is_configured": true, 00:13:22.359 "data_offset": 2048, 00:13:22.359 "data_size": 63488 00:13:22.359 } 00:13:22.359 ] 00:13:22.359 }' 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.359 13:27:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.925 81.38 IOPS, 244.12 MiB/s [2024-11-20T13:27:04.593Z] 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.925 [2024-11-20 13:27:04.299295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:22.925 [2024-11-20 13:27:04.299378] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.925 00:13:22.925 Latency(us) 00:13:22.925 [2024-11-20T13:27:04.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.925 
Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:22.925 raid_bdev1 : 8.36 79.05 237.14 0.00 0.00 17534.04 465.05 124547.02 00:13:22.925 [2024-11-20T13:27:04.593Z] =================================================================================================================== 00:13:22.925 [2024-11-20T13:27:04.593Z] Total : 79.05 237.14 0.00 0.00 17534.04 465.05 124547.02 00:13:22.925 [2024-11-20 13:27:04.365168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.925 [2024-11-20 13:27:04.365397] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.925 [2024-11-20 13:27:04.365617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.925 { 00:13:22.925 "results": [ 00:13:22.925 { 00:13:22.925 "job": "raid_bdev1", 00:13:22.925 "core_mask": "0x1", 00:13:22.925 "workload": "randrw", 00:13:22.925 "percentage": 50, 00:13:22.925 "status": "finished", 00:13:22.925 "queue_depth": 2, 00:13:22.925 "io_size": 3145728, 00:13:22.925 "runtime": 8.362075, 00:13:22.925 "iops": 79.04736563592171, 00:13:22.925 "mibps": 237.1420969077651, 00:13:22.925 "io_failed": 0, 00:13:22.925 "io_timeout": 0, 00:13:22.925 "avg_latency_us": 17534.040031974844, 00:13:22.925 "min_latency_us": 465.0480349344978, 00:13:22.925 "max_latency_us": 124547.01834061135 00:13:22.925 } 00:13:22.925 ], 00:13:22.925 "core_count": 1 00:13:22.925 } 00:13:22.925 [2024-11-20 13:27:04.365711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:22.925 13:27:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:22.925 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:23.185 /dev/nbd0 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:23.185 
13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.185 1+0 records in 00:13:23.185 1+0 records out 00:13:23.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384031 s, 10.7 MB/s 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:23.185 13:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:23.444 /dev/nbd1 00:13:23.444 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:23.444 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:23.444 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:23.444 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:23.444 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:23.444 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:23.444 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:23.444 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:23.444 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:23.444 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:23.444 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.444 1+0 records in 00:13:23.444 1+0 records out 00:13:23.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043449 s, 9.4 MB/s 00:13:23.444 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.444 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:23.444 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.444 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:23.444 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:23.445 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.445 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:23.445 13:27:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:23.704 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:23.704 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.704 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:23.704 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:23.704 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:23.704 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.704 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 
00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:23.964 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:24.223 /dev/nbd1 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:24.223 13:27:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.223 1+0 records in 00:13:24.223 1+0 records out 00:13:24.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312136 s, 13.1 MB/s 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.223 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:24.224 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:13:24.224 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:24.224 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.224 13:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:24.482 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:24.482 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:24.482 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:24.482 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.482 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.483 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:24.483 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:24.483 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.483 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:24.483 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.483 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:24.483 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:24.483 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:24.483 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.483 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.050 [2024-11-20 13:27:06.456886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:25.050 [2024-11-20 13:27:06.456986] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.050 [2024-11-20 13:27:06.457035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:25.050 [2024-11-20 13:27:06.457051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.050 [2024-11-20 13:27:06.459836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.050 [2024-11-20 13:27:06.459905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:25.050 [2024-11-20 13:27:06.460050] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:25.050 [2024-11-20 13:27:06.460116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:25.050 [2024-11-20 13:27:06.460272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.050 [2024-11-20 13:27:06.460406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:25.050 spare 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.050 [2024-11-20 13:27:06.560344] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:13:25.050 [2024-11-20 13:27:06.560427] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:25.050 [2024-11-20 13:27:06.560858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000337b0 00:13:25.050 [2024-11-20 13:27:06.561113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000001580 00:13:25.050 [2024-11-20 13:27:06.561128] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:13:25.050 [2024-11-20 13:27:06.561389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.050 
13:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.050 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.050 "name": "raid_bdev1", 00:13:25.050 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:25.050 "strip_size_kb": 0, 00:13:25.050 "state": "online", 00:13:25.050 "raid_level": "raid1", 00:13:25.050 "superblock": true, 00:13:25.050 "num_base_bdevs": 4, 00:13:25.050 "num_base_bdevs_discovered": 3, 00:13:25.050 "num_base_bdevs_operational": 3, 00:13:25.050 "base_bdevs_list": [ 00:13:25.050 { 00:13:25.050 "name": "spare", 00:13:25.050 "uuid": "f9346d60-704a-5522-8553-666343fde9f1", 00:13:25.051 "is_configured": true, 00:13:25.051 "data_offset": 2048, 00:13:25.051 "data_size": 63488 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "name": null, 00:13:25.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.051 "is_configured": false, 00:13:25.051 "data_offset": 2048, 00:13:25.051 "data_size": 63488 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "name": "BaseBdev3", 00:13:25.051 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:25.051 "is_configured": true, 00:13:25.051 "data_offset": 2048, 00:13:25.051 "data_size": 63488 00:13:25.051 }, 00:13:25.051 { 00:13:25.051 "name": "BaseBdev4", 00:13:25.051 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:25.051 "is_configured": true, 00:13:25.051 "data_offset": 2048, 00:13:25.051 "data_size": 63488 00:13:25.051 } 00:13:25.051 ] 00:13:25.051 }' 00:13:25.051 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.051 13:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.620 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.620 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.620 13:27:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.620 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.620 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.620 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.620 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.620 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.620 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.620 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.620 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.620 "name": "raid_bdev1", 00:13:25.620 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:25.620 "strip_size_kb": 0, 00:13:25.620 "state": "online", 00:13:25.620 "raid_level": "raid1", 00:13:25.620 "superblock": true, 00:13:25.620 "num_base_bdevs": 4, 00:13:25.621 "num_base_bdevs_discovered": 3, 00:13:25.621 "num_base_bdevs_operational": 3, 00:13:25.621 "base_bdevs_list": [ 00:13:25.621 { 00:13:25.621 "name": "spare", 00:13:25.621 "uuid": "f9346d60-704a-5522-8553-666343fde9f1", 00:13:25.621 "is_configured": true, 00:13:25.621 "data_offset": 2048, 00:13:25.621 "data_size": 63488 00:13:25.621 }, 00:13:25.621 { 00:13:25.621 "name": null, 00:13:25.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.621 "is_configured": false, 00:13:25.621 "data_offset": 2048, 00:13:25.621 "data_size": 63488 00:13:25.621 }, 00:13:25.621 { 00:13:25.621 "name": "BaseBdev3", 00:13:25.621 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:25.621 "is_configured": true, 00:13:25.621 "data_offset": 2048, 00:13:25.621 
"data_size": 63488 00:13:25.621 }, 00:13:25.621 { 00:13:25.621 "name": "BaseBdev4", 00:13:25.621 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:25.621 "is_configured": true, 00:13:25.621 "data_offset": 2048, 00:13:25.621 "data_size": 63488 00:13:25.621 } 00:13:25.621 ] 00:13:25.621 }' 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.621 [2024-11-20 13:27:07.260461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.621 13:27:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.621 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.879 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.879 "name": "raid_bdev1", 00:13:25.879 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:25.879 "strip_size_kb": 0, 00:13:25.879 "state": "online", 00:13:25.879 "raid_level": "raid1", 00:13:25.879 
"superblock": true, 00:13:25.879 "num_base_bdevs": 4, 00:13:25.879 "num_base_bdevs_discovered": 2, 00:13:25.879 "num_base_bdevs_operational": 2, 00:13:25.879 "base_bdevs_list": [ 00:13:25.879 { 00:13:25.879 "name": null, 00:13:25.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.879 "is_configured": false, 00:13:25.879 "data_offset": 0, 00:13:25.879 "data_size": 63488 00:13:25.879 }, 00:13:25.879 { 00:13:25.879 "name": null, 00:13:25.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.879 "is_configured": false, 00:13:25.879 "data_offset": 2048, 00:13:25.879 "data_size": 63488 00:13:25.879 }, 00:13:25.879 { 00:13:25.879 "name": "BaseBdev3", 00:13:25.879 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:25.879 "is_configured": true, 00:13:25.879 "data_offset": 2048, 00:13:25.879 "data_size": 63488 00:13:25.879 }, 00:13:25.879 { 00:13:25.879 "name": "BaseBdev4", 00:13:25.879 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:25.879 "is_configured": true, 00:13:25.879 "data_offset": 2048, 00:13:25.879 "data_size": 63488 00:13:25.879 } 00:13:25.879 ] 00:13:25.879 }' 00:13:25.879 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.879 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.137 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:26.137 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.137 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.137 [2024-11-20 13:27:07.763974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.137 [2024-11-20 13:27:07.764368] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:26.137 [2024-11-20 13:27:07.764449] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:26.137 [2024-11-20 13:27:07.764527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.137 [2024-11-20 13:27:07.769469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033880 00:13:26.137 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.137 13:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:26.137 [2024-11-20 13:27:07.771953] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:27.512 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.512 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.512 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.512 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.512 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.512 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.513 "name": "raid_bdev1", 00:13:27.513 "uuid": 
"7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:27.513 "strip_size_kb": 0, 00:13:27.513 "state": "online", 00:13:27.513 "raid_level": "raid1", 00:13:27.513 "superblock": true, 00:13:27.513 "num_base_bdevs": 4, 00:13:27.513 "num_base_bdevs_discovered": 3, 00:13:27.513 "num_base_bdevs_operational": 3, 00:13:27.513 "process": { 00:13:27.513 "type": "rebuild", 00:13:27.513 "target": "spare", 00:13:27.513 "progress": { 00:13:27.513 "blocks": 20480, 00:13:27.513 "percent": 32 00:13:27.513 } 00:13:27.513 }, 00:13:27.513 "base_bdevs_list": [ 00:13:27.513 { 00:13:27.513 "name": "spare", 00:13:27.513 "uuid": "f9346d60-704a-5522-8553-666343fde9f1", 00:13:27.513 "is_configured": true, 00:13:27.513 "data_offset": 2048, 00:13:27.513 "data_size": 63488 00:13:27.513 }, 00:13:27.513 { 00:13:27.513 "name": null, 00:13:27.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.513 "is_configured": false, 00:13:27.513 "data_offset": 2048, 00:13:27.513 "data_size": 63488 00:13:27.513 }, 00:13:27.513 { 00:13:27.513 "name": "BaseBdev3", 00:13:27.513 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:27.513 "is_configured": true, 00:13:27.513 "data_offset": 2048, 00:13:27.513 "data_size": 63488 00:13:27.513 }, 00:13:27.513 { 00:13:27.513 "name": "BaseBdev4", 00:13:27.513 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:27.513 "is_configured": true, 00:13:27.513 "data_offset": 2048, 00:13:27.513 "data_size": 63488 00:13:27.513 } 00:13:27.513 ] 00:13:27.513 }' 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.513 [2024-11-20 13:27:08.948418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:27.513 [2024-11-20 13:27:08.979194] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:27.513 [2024-11-20 13:27:08.979566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.513 [2024-11-20 13:27:08.979663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:27.513 [2024-11-20 13:27:08.979725] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.513 13:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.513 13:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.513 13:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.513 "name": "raid_bdev1", 00:13:27.513 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:27.513 "strip_size_kb": 0, 00:13:27.513 "state": "online", 00:13:27.513 "raid_level": "raid1", 00:13:27.513 "superblock": true, 00:13:27.513 "num_base_bdevs": 4, 00:13:27.513 "num_base_bdevs_discovered": 2, 00:13:27.513 "num_base_bdevs_operational": 2, 00:13:27.513 "base_bdevs_list": [ 00:13:27.513 { 00:13:27.513 "name": null, 00:13:27.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.513 "is_configured": false, 00:13:27.513 "data_offset": 0, 00:13:27.513 "data_size": 63488 00:13:27.513 }, 00:13:27.513 { 00:13:27.513 "name": null, 00:13:27.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.513 "is_configured": false, 00:13:27.513 "data_offset": 2048, 00:13:27.513 "data_size": 63488 00:13:27.513 }, 00:13:27.513 { 00:13:27.513 "name": "BaseBdev3", 00:13:27.513 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:27.513 "is_configured": true, 00:13:27.513 "data_offset": 2048, 00:13:27.513 "data_size": 63488 00:13:27.513 }, 00:13:27.513 { 00:13:27.513 "name": "BaseBdev4", 00:13:27.513 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 
00:13:27.513 "is_configured": true, 00:13:27.513 "data_offset": 2048, 00:13:27.513 "data_size": 63488 00:13:27.513 } 00:13:27.513 ] 00:13:27.513 }' 00:13:27.513 13:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.513 13:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.086 13:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:28.086 13:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.086 13:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.086 [2024-11-20 13:27:09.505179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:28.086 [2024-11-20 13:27:09.505380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.086 [2024-11-20 13:27:09.505417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:28.086 [2024-11-20 13:27:09.505432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.086 [2024-11-20 13:27:09.505978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.086 [2024-11-20 13:27:09.506038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:28.086 [2024-11-20 13:27:09.506165] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:28.086 [2024-11-20 13:27:09.506206] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:28.086 [2024-11-20 13:27:09.506218] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:28.086 [2024-11-20 13:27:09.506253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:28.086 [2024-11-20 13:27:09.511518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033950 00:13:28.086 spare 00:13:28.086 13:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.086 13:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:28.086 [2024-11-20 13:27:09.514038] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.023 "name": "raid_bdev1", 00:13:29.023 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:29.023 "strip_size_kb": 0, 00:13:29.023 
"state": "online", 00:13:29.023 "raid_level": "raid1", 00:13:29.023 "superblock": true, 00:13:29.023 "num_base_bdevs": 4, 00:13:29.023 "num_base_bdevs_discovered": 3, 00:13:29.023 "num_base_bdevs_operational": 3, 00:13:29.023 "process": { 00:13:29.023 "type": "rebuild", 00:13:29.023 "target": "spare", 00:13:29.023 "progress": { 00:13:29.023 "blocks": 20480, 00:13:29.023 "percent": 32 00:13:29.023 } 00:13:29.023 }, 00:13:29.023 "base_bdevs_list": [ 00:13:29.023 { 00:13:29.023 "name": "spare", 00:13:29.023 "uuid": "f9346d60-704a-5522-8553-666343fde9f1", 00:13:29.023 "is_configured": true, 00:13:29.023 "data_offset": 2048, 00:13:29.023 "data_size": 63488 00:13:29.023 }, 00:13:29.023 { 00:13:29.023 "name": null, 00:13:29.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.023 "is_configured": false, 00:13:29.023 "data_offset": 2048, 00:13:29.023 "data_size": 63488 00:13:29.023 }, 00:13:29.023 { 00:13:29.023 "name": "BaseBdev3", 00:13:29.023 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:29.023 "is_configured": true, 00:13:29.023 "data_offset": 2048, 00:13:29.023 "data_size": 63488 00:13:29.023 }, 00:13:29.023 { 00:13:29.023 "name": "BaseBdev4", 00:13:29.023 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:29.023 "is_configured": true, 00:13:29.023 "data_offset": 2048, 00:13:29.023 "data_size": 63488 00:13:29.023 } 00:13:29.023 ] 00:13:29.023 }' 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:29.023 13:27:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.023 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.023 [2024-11-20 13:27:10.673915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.283 [2024-11-20 13:27:10.720902] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:29.283 [2024-11-20 13:27:10.721177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.283 [2024-11-20 13:27:10.721215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.283 [2024-11-20 13:27:10.721225] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.283 13:27:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.283 "name": "raid_bdev1", 00:13:29.283 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:29.283 "strip_size_kb": 0, 00:13:29.283 "state": "online", 00:13:29.283 "raid_level": "raid1", 00:13:29.283 "superblock": true, 00:13:29.283 "num_base_bdevs": 4, 00:13:29.283 "num_base_bdevs_discovered": 2, 00:13:29.283 "num_base_bdevs_operational": 2, 00:13:29.283 "base_bdevs_list": [ 00:13:29.283 { 00:13:29.283 "name": null, 00:13:29.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.283 "is_configured": false, 00:13:29.283 "data_offset": 0, 00:13:29.283 "data_size": 63488 00:13:29.283 }, 00:13:29.283 { 00:13:29.283 "name": null, 00:13:29.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.283 "is_configured": false, 00:13:29.283 "data_offset": 2048, 00:13:29.283 "data_size": 63488 00:13:29.283 }, 00:13:29.283 { 00:13:29.283 "name": "BaseBdev3", 00:13:29.283 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:29.283 "is_configured": true, 00:13:29.283 "data_offset": 2048, 00:13:29.283 "data_size": 63488 00:13:29.283 }, 00:13:29.283 { 00:13:29.283 "name": "BaseBdev4", 00:13:29.283 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:29.283 "is_configured": true, 00:13:29.283 "data_offset": 2048, 00:13:29.283 
"data_size": 63488 00:13:29.283 } 00:13:29.283 ] 00:13:29.283 }' 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.283 13:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.543 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:29.543 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.543 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:29.543 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:29.543 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.890 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.890 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.890 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.890 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.890 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.890 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.890 "name": "raid_bdev1", 00:13:29.890 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:29.890 "strip_size_kb": 0, 00:13:29.890 "state": "online", 00:13:29.890 "raid_level": "raid1", 00:13:29.890 "superblock": true, 00:13:29.890 "num_base_bdevs": 4, 00:13:29.890 "num_base_bdevs_discovered": 2, 00:13:29.890 "num_base_bdevs_operational": 2, 00:13:29.890 "base_bdevs_list": [ 00:13:29.890 { 00:13:29.890 "name": null, 00:13:29.890 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:29.890 "is_configured": false, 00:13:29.890 "data_offset": 0, 00:13:29.890 "data_size": 63488 00:13:29.890 }, 00:13:29.891 { 00:13:29.891 "name": null, 00:13:29.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.891 "is_configured": false, 00:13:29.891 "data_offset": 2048, 00:13:29.891 "data_size": 63488 00:13:29.891 }, 00:13:29.891 { 00:13:29.891 "name": "BaseBdev3", 00:13:29.891 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:29.891 "is_configured": true, 00:13:29.891 "data_offset": 2048, 00:13:29.891 "data_size": 63488 00:13:29.891 }, 00:13:29.891 { 00:13:29.891 "name": "BaseBdev4", 00:13:29.891 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:29.891 "is_configured": true, 00:13:29.891 "data_offset": 2048, 00:13:29.891 "data_size": 63488 00:13:29.891 } 00:13:29.891 ] 00:13:29.891 }' 00:13:29.891 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.891 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:29.891 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.891 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:29.891 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:29.891 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.891 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.891 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.891 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:29.891 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.891 13:27:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.891 [2024-11-20 13:27:11.357939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:29.891 [2024-11-20 13:27:11.358085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.891 [2024-11-20 13:27:11.358125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:29.891 [2024-11-20 13:27:11.358138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.891 [2024-11-20 13:27:11.358660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.891 [2024-11-20 13:27:11.358706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:29.891 [2024-11-20 13:27:11.358825] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:29.891 [2024-11-20 13:27:11.358843] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:29.891 [2024-11-20 13:27:11.358859] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:29.891 [2024-11-20 13:27:11.358886] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:29.891 BaseBdev1 00:13:29.891 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.891 13:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.850 "name": "raid_bdev1", 00:13:30.850 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:30.850 "strip_size_kb": 0, 00:13:30.850 "state": "online", 00:13:30.850 "raid_level": "raid1", 00:13:30.850 "superblock": true, 00:13:30.850 "num_base_bdevs": 4, 00:13:30.850 "num_base_bdevs_discovered": 2, 00:13:30.850 "num_base_bdevs_operational": 2, 00:13:30.850 "base_bdevs_list": [ 00:13:30.850 { 00:13:30.850 "name": null, 00:13:30.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.850 "is_configured": false, 00:13:30.850 
"data_offset": 0, 00:13:30.850 "data_size": 63488 00:13:30.850 }, 00:13:30.850 { 00:13:30.850 "name": null, 00:13:30.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.850 "is_configured": false, 00:13:30.850 "data_offset": 2048, 00:13:30.850 "data_size": 63488 00:13:30.850 }, 00:13:30.850 { 00:13:30.850 "name": "BaseBdev3", 00:13:30.850 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:30.850 "is_configured": true, 00:13:30.850 "data_offset": 2048, 00:13:30.850 "data_size": 63488 00:13:30.850 }, 00:13:30.850 { 00:13:30.850 "name": "BaseBdev4", 00:13:30.850 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:30.850 "is_configured": true, 00:13:30.850 "data_offset": 2048, 00:13:30.850 "data_size": 63488 00:13:30.850 } 00:13:30.850 ] 00:13:30.850 }' 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.850 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.417 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.417 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.417 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.417 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.417 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.417 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.417 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.417 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.417 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:13:31.417 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.417 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.417 "name": "raid_bdev1", 00:13:31.417 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:31.417 "strip_size_kb": 0, 00:13:31.417 "state": "online", 00:13:31.417 "raid_level": "raid1", 00:13:31.417 "superblock": true, 00:13:31.417 "num_base_bdevs": 4, 00:13:31.417 "num_base_bdevs_discovered": 2, 00:13:31.417 "num_base_bdevs_operational": 2, 00:13:31.417 "base_bdevs_list": [ 00:13:31.417 { 00:13:31.417 "name": null, 00:13:31.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.417 "is_configured": false, 00:13:31.417 "data_offset": 0, 00:13:31.417 "data_size": 63488 00:13:31.417 }, 00:13:31.417 { 00:13:31.417 "name": null, 00:13:31.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.417 "is_configured": false, 00:13:31.417 "data_offset": 2048, 00:13:31.417 "data_size": 63488 00:13:31.417 }, 00:13:31.417 { 00:13:31.417 "name": "BaseBdev3", 00:13:31.417 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:31.417 "is_configured": true, 00:13:31.417 "data_offset": 2048, 00:13:31.417 "data_size": 63488 00:13:31.417 }, 00:13:31.417 { 00:13:31.417 "name": "BaseBdev4", 00:13:31.417 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:31.417 "is_configured": true, 00:13:31.417 "data_offset": 2048, 00:13:31.417 "data_size": 63488 00:13:31.417 } 00:13:31.417 ] 00:13:31.417 }' 00:13:31.417 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.417 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.417 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.417 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.417 
13:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:31.418 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:13:31.418 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:31.418 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:31.418 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.418 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:31.418 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.418 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:31.418 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.418 13:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.418 [2024-11-20 13:27:13.003808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.418 [2024-11-20 13:27:13.004180] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:31.418 [2024-11-20 13:27:13.004209] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:31.418 request: 00:13:31.418 { 00:13:31.418 "base_bdev": "BaseBdev1", 00:13:31.418 "raid_bdev": "raid_bdev1", 00:13:31.418 "method": "bdev_raid_add_base_bdev", 00:13:31.418 "req_id": 1 00:13:31.418 } 00:13:31.418 Got JSON-RPC error response 00:13:31.418 response: 00:13:31.418 { 00:13:31.418 "code": -22, 00:13:31.418 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:31.418 } 00:13:31.418 13:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:31.418 13:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:31.418 13:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:31.418 13:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:31.418 13:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:31.418 13:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:32.354 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:32.354 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.354 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.354 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.354 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.354 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:32.354 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.354 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.354 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.354 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.354 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.354 13:27:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.354 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.354 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.612 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.612 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.612 "name": "raid_bdev1", 00:13:32.612 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:32.612 "strip_size_kb": 0, 00:13:32.612 "state": "online", 00:13:32.612 "raid_level": "raid1", 00:13:32.612 "superblock": true, 00:13:32.612 "num_base_bdevs": 4, 00:13:32.612 "num_base_bdevs_discovered": 2, 00:13:32.612 "num_base_bdevs_operational": 2, 00:13:32.612 "base_bdevs_list": [ 00:13:32.612 { 00:13:32.612 "name": null, 00:13:32.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.612 "is_configured": false, 00:13:32.612 "data_offset": 0, 00:13:32.612 "data_size": 63488 00:13:32.612 }, 00:13:32.612 { 00:13:32.612 "name": null, 00:13:32.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.612 "is_configured": false, 00:13:32.612 "data_offset": 2048, 00:13:32.612 "data_size": 63488 00:13:32.612 }, 00:13:32.612 { 00:13:32.612 "name": "BaseBdev3", 00:13:32.612 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:32.612 "is_configured": true, 00:13:32.612 "data_offset": 2048, 00:13:32.612 "data_size": 63488 00:13:32.612 }, 00:13:32.612 { 00:13:32.612 "name": "BaseBdev4", 00:13:32.612 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:32.612 "is_configured": true, 00:13:32.612 "data_offset": 2048, 00:13:32.612 "data_size": 63488 00:13:32.612 } 00:13:32.612 ] 00:13:32.612 }' 00:13:32.612 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.612 13:27:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.871 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:32.871 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.871 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:32.871 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:32.871 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.871 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.871 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.871 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.871 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.871 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.871 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.871 "name": "raid_bdev1", 00:13:32.871 "uuid": "7f9495a9-c3d2-4da3-a0ca-684dd9b083de", 00:13:32.871 "strip_size_kb": 0, 00:13:32.871 "state": "online", 00:13:32.871 "raid_level": "raid1", 00:13:32.871 "superblock": true, 00:13:32.871 "num_base_bdevs": 4, 00:13:32.871 "num_base_bdevs_discovered": 2, 00:13:32.871 "num_base_bdevs_operational": 2, 00:13:32.871 "base_bdevs_list": [ 00:13:32.871 { 00:13:32.871 "name": null, 00:13:32.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.871 "is_configured": false, 00:13:32.871 "data_offset": 0, 00:13:32.871 "data_size": 63488 00:13:32.871 }, 00:13:32.871 { 00:13:32.871 "name": null, 00:13:32.871 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:32.871 "is_configured": false, 00:13:32.871 "data_offset": 2048, 00:13:32.871 "data_size": 63488 00:13:32.871 }, 00:13:32.871 { 00:13:32.871 "name": "BaseBdev3", 00:13:32.871 "uuid": "8fbf2e80-6b50-5ff8-a96f-a7fe1461e0c9", 00:13:32.871 "is_configured": true, 00:13:32.871 "data_offset": 2048, 00:13:32.871 "data_size": 63488 00:13:32.871 }, 00:13:32.871 { 00:13:32.871 "name": "BaseBdev4", 00:13:32.871 "uuid": "89ed1e46-5498-5f43-ae9c-6e84686f63db", 00:13:32.871 "is_configured": true, 00:13:32.871 "data_offset": 2048, 00:13:32.871 "data_size": 63488 00:13:32.871 } 00:13:32.871 ] 00:13:32.871 }' 00:13:32.871 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.131 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.131 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.131 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:33.131 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89483 00:13:33.131 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 89483 ']' 00:13:33.131 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 89483 00:13:33.131 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:33.131 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.131 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89483 00:13:33.131 killing process with pid 89483 00:13:33.131 Received shutdown signal, test time was about 18.685872 seconds 00:13:33.131 00:13:33.131 Latency(us) 00:13:33.131 [2024-11-20T13:27:14.799Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:13:33.131 [2024-11-20T13:27:14.799Z] =================================================================================================================== 00:13:33.131 [2024-11-20T13:27:14.799Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:33.131 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.131 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.131 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89483' 00:13:33.131 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 89483 00:13:33.131 [2024-11-20 13:27:14.662959] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:33.131 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 89483 00:13:33.131 [2024-11-20 13:27:14.663167] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.131 [2024-11-20 13:27:14.663290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.131 [2024-11-20 13:27:14.663308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:13:33.131 [2024-11-20 13:27:14.718211] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.391 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:33.391 00:13:33.391 real 0m20.851s 00:13:33.391 user 0m28.178s 00:13:33.391 sys 0m2.541s 00:13:33.391 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.391 ************************************ 00:13:33.391 END TEST raid_rebuild_test_sb_io 00:13:33.391 ************************************ 00:13:33.391 13:27:14 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:33.391 13:27:15 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:33.391 13:27:15 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:33.391 13:27:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:33.391 13:27:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.391 13:27:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.391 ************************************ 00:13:33.391 START TEST raid5f_state_function_test 00:13:33.391 ************************************ 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:33.391 13:27:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:33.391 Process raid pid: 90195 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90195 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90195' 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 90195 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 90195 ']' 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.391 13:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.650 [2024-11-20 13:27:15.137652] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:13:33.650 [2024-11-20 13:27:15.137852] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.650 [2024-11-20 13:27:15.284226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.910 [2024-11-20 13:27:15.330917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.910 [2024-11-20 13:27:15.379587] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.910 [2024-11-20 13:27:15.379754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.481 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.481 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:34.481 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:34.481 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.481 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.741 [2024-11-20 13:27:16.152319] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:34.741 [2024-11-20 13:27:16.152523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:34.741 [2024-11-20 13:27:16.152537] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:34.741 [2024-11-20 13:27:16.152552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:34.741 [2024-11-20 13:27:16.152561] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:34.741 [2024-11-20 13:27:16.152575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.741 "name": "Existed_Raid", 00:13:34.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.741 "strip_size_kb": 64, 00:13:34.741 "state": "configuring", 00:13:34.741 "raid_level": "raid5f", 00:13:34.741 "superblock": false, 00:13:34.741 "num_base_bdevs": 3, 00:13:34.741 "num_base_bdevs_discovered": 0, 00:13:34.741 "num_base_bdevs_operational": 3, 00:13:34.741 "base_bdevs_list": [ 00:13:34.741 { 00:13:34.741 "name": "BaseBdev1", 00:13:34.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.741 "is_configured": false, 00:13:34.741 "data_offset": 0, 00:13:34.741 "data_size": 0 00:13:34.741 }, 00:13:34.741 { 00:13:34.741 "name": "BaseBdev2", 00:13:34.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.741 "is_configured": false, 00:13:34.741 "data_offset": 0, 00:13:34.741 "data_size": 0 00:13:34.741 }, 00:13:34.741 { 00:13:34.741 "name": "BaseBdev3", 00:13:34.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.741 "is_configured": false, 00:13:34.741 "data_offset": 0, 00:13:34.741 "data_size": 0 00:13:34.741 } 00:13:34.741 ] 00:13:34.741 }' 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.741 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.002 [2024-11-20 13:27:16.627741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:35.002 [2024-11-20 13:27:16.627799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001200 name Existed_Raid, state configuring 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.002 [2024-11-20 13:27:16.639800] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:35.002 [2024-11-20 13:27:16.639881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:35.002 [2024-11-20 13:27:16.639893] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:35.002 [2024-11-20 13:27:16.639905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:35.002 [2024-11-20 13:27:16.639913] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:35.002 [2024-11-20 13:27:16.639924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.002 [2024-11-20 13:27:16.662496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.002 BaseBdev1 00:13:35.002 13:27:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.002 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.262 [ 00:13:35.262 { 00:13:35.262 "name": "BaseBdev1", 00:13:35.262 "aliases": [ 00:13:35.262 "65f6d218-2ae1-46ff-92c6-b00ad9ed907c" 00:13:35.262 ], 00:13:35.262 "product_name": "Malloc disk", 00:13:35.262 "block_size": 512, 00:13:35.262 "num_blocks": 65536, 00:13:35.262 "uuid": "65f6d218-2ae1-46ff-92c6-b00ad9ed907c", 00:13:35.262 "assigned_rate_limits": { 00:13:35.262 "rw_ios_per_sec": 0, 00:13:35.262 
"rw_mbytes_per_sec": 0, 00:13:35.262 "r_mbytes_per_sec": 0, 00:13:35.262 "w_mbytes_per_sec": 0 00:13:35.262 }, 00:13:35.262 "claimed": true, 00:13:35.262 "claim_type": "exclusive_write", 00:13:35.262 "zoned": false, 00:13:35.262 "supported_io_types": { 00:13:35.262 "read": true, 00:13:35.262 "write": true, 00:13:35.262 "unmap": true, 00:13:35.262 "flush": true, 00:13:35.262 "reset": true, 00:13:35.262 "nvme_admin": false, 00:13:35.262 "nvme_io": false, 00:13:35.262 "nvme_io_md": false, 00:13:35.262 "write_zeroes": true, 00:13:35.262 "zcopy": true, 00:13:35.262 "get_zone_info": false, 00:13:35.262 "zone_management": false, 00:13:35.262 "zone_append": false, 00:13:35.262 "compare": false, 00:13:35.262 "compare_and_write": false, 00:13:35.262 "abort": true, 00:13:35.262 "seek_hole": false, 00:13:35.262 "seek_data": false, 00:13:35.262 "copy": true, 00:13:35.262 "nvme_iov_md": false 00:13:35.262 }, 00:13:35.262 "memory_domains": [ 00:13:35.262 { 00:13:35.262 "dma_device_id": "system", 00:13:35.262 "dma_device_type": 1 00:13:35.262 }, 00:13:35.262 { 00:13:35.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.262 "dma_device_type": 2 00:13:35.262 } 00:13:35.262 ], 00:13:35.262 "driver_specific": {} 00:13:35.262 } 00:13:35.262 ] 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:35.262 13:27:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.262 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.262 "name": "Existed_Raid", 00:13:35.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.262 "strip_size_kb": 64, 00:13:35.262 "state": "configuring", 00:13:35.262 "raid_level": "raid5f", 00:13:35.262 "superblock": false, 00:13:35.262 "num_base_bdevs": 3, 00:13:35.262 "num_base_bdevs_discovered": 1, 00:13:35.263 "num_base_bdevs_operational": 3, 00:13:35.263 "base_bdevs_list": [ 00:13:35.263 { 00:13:35.263 "name": "BaseBdev1", 00:13:35.263 "uuid": "65f6d218-2ae1-46ff-92c6-b00ad9ed907c", 00:13:35.263 "is_configured": true, 00:13:35.263 "data_offset": 0, 00:13:35.263 "data_size": 65536 00:13:35.263 }, 00:13:35.263 { 00:13:35.263 "name": 
"BaseBdev2", 00:13:35.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.263 "is_configured": false, 00:13:35.263 "data_offset": 0, 00:13:35.263 "data_size": 0 00:13:35.263 }, 00:13:35.263 { 00:13:35.263 "name": "BaseBdev3", 00:13:35.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.263 "is_configured": false, 00:13:35.263 "data_offset": 0, 00:13:35.263 "data_size": 0 00:13:35.263 } 00:13:35.263 ] 00:13:35.263 }' 00:13:35.263 13:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.263 13:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.522 [2024-11-20 13:27:17.169818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:35.522 [2024-11-20 13:27:17.170032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.522 [2024-11-20 13:27:17.177874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.522 [2024-11-20 13:27:17.180342] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:13:35.522 [2024-11-20 13:27:17.180500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:35.522 [2024-11-20 13:27:17.180542] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:35.522 [2024-11-20 13:27:17.180595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.522 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.781 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.781 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.782 "name": "Existed_Raid", 00:13:35.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.782 "strip_size_kb": 64, 00:13:35.782 "state": "configuring", 00:13:35.782 "raid_level": "raid5f", 00:13:35.782 "superblock": false, 00:13:35.782 "num_base_bdevs": 3, 00:13:35.782 "num_base_bdevs_discovered": 1, 00:13:35.782 "num_base_bdevs_operational": 3, 00:13:35.782 "base_bdevs_list": [ 00:13:35.782 { 00:13:35.782 "name": "BaseBdev1", 00:13:35.782 "uuid": "65f6d218-2ae1-46ff-92c6-b00ad9ed907c", 00:13:35.782 "is_configured": true, 00:13:35.782 "data_offset": 0, 00:13:35.782 "data_size": 65536 00:13:35.782 }, 00:13:35.782 { 00:13:35.782 "name": "BaseBdev2", 00:13:35.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.782 "is_configured": false, 00:13:35.782 "data_offset": 0, 00:13:35.782 "data_size": 0 00:13:35.782 }, 00:13:35.782 { 00:13:35.782 "name": "BaseBdev3", 00:13:35.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.782 "is_configured": false, 00:13:35.782 "data_offset": 0, 00:13:35.782 "data_size": 0 00:13:35.782 } 00:13:35.782 ] 00:13:35.782 }' 00:13:35.782 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.782 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.041 [2024-11-20 13:27:17.677108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:36.041 BaseBdev2 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:36.041 [ 00:13:36.041 { 00:13:36.041 "name": "BaseBdev2", 00:13:36.041 "aliases": [ 00:13:36.041 "8e54b9c9-d025-4380-a8a1-451c245b5a22" 00:13:36.041 ], 00:13:36.041 "product_name": "Malloc disk", 00:13:36.041 "block_size": 512, 00:13:36.041 "num_blocks": 65536, 00:13:36.041 "uuid": "8e54b9c9-d025-4380-a8a1-451c245b5a22", 00:13:36.041 "assigned_rate_limits": { 00:13:36.041 "rw_ios_per_sec": 0, 00:13:36.041 "rw_mbytes_per_sec": 0, 00:13:36.041 "r_mbytes_per_sec": 0, 00:13:36.041 "w_mbytes_per_sec": 0 00:13:36.041 }, 00:13:36.041 "claimed": true, 00:13:36.041 "claim_type": "exclusive_write", 00:13:36.041 "zoned": false, 00:13:36.041 "supported_io_types": { 00:13:36.041 "read": true, 00:13:36.041 "write": true, 00:13:36.041 "unmap": true, 00:13:36.041 "flush": true, 00:13:36.041 "reset": true, 00:13:36.041 "nvme_admin": false, 00:13:36.041 "nvme_io": false, 00:13:36.041 "nvme_io_md": false, 00:13:36.041 "write_zeroes": true, 00:13:36.041 "zcopy": true, 00:13:36.041 "get_zone_info": false, 00:13:36.041 "zone_management": false, 00:13:36.041 "zone_append": false, 00:13:36.041 "compare": false, 00:13:36.041 "compare_and_write": false, 00:13:36.041 "abort": true, 00:13:36.041 "seek_hole": false, 00:13:36.041 "seek_data": false, 00:13:36.041 "copy": true, 00:13:36.041 "nvme_iov_md": false 00:13:36.041 }, 00:13:36.041 "memory_domains": [ 00:13:36.041 { 00:13:36.041 "dma_device_id": "system", 00:13:36.041 "dma_device_type": 1 00:13:36.041 }, 00:13:36.041 { 00:13:36.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.041 "dma_device_type": 2 00:13:36.041 } 00:13:36.041 ], 00:13:36.041 "driver_specific": {} 00:13:36.041 } 00:13:36.041 ] 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.041 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.301 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.301 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.301 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.301 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.301 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.301 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:13:36.301 "name": "Existed_Raid", 00:13:36.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.301 "strip_size_kb": 64, 00:13:36.301 "state": "configuring", 00:13:36.301 "raid_level": "raid5f", 00:13:36.301 "superblock": false, 00:13:36.301 "num_base_bdevs": 3, 00:13:36.301 "num_base_bdevs_discovered": 2, 00:13:36.301 "num_base_bdevs_operational": 3, 00:13:36.301 "base_bdevs_list": [ 00:13:36.301 { 00:13:36.301 "name": "BaseBdev1", 00:13:36.301 "uuid": "65f6d218-2ae1-46ff-92c6-b00ad9ed907c", 00:13:36.301 "is_configured": true, 00:13:36.301 "data_offset": 0, 00:13:36.301 "data_size": 65536 00:13:36.301 }, 00:13:36.301 { 00:13:36.301 "name": "BaseBdev2", 00:13:36.301 "uuid": "8e54b9c9-d025-4380-a8a1-451c245b5a22", 00:13:36.301 "is_configured": true, 00:13:36.301 "data_offset": 0, 00:13:36.301 "data_size": 65536 00:13:36.301 }, 00:13:36.301 { 00:13:36.301 "name": "BaseBdev3", 00:13:36.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.301 "is_configured": false, 00:13:36.301 "data_offset": 0, 00:13:36.301 "data_size": 0 00:13:36.301 } 00:13:36.301 ] 00:13:36.301 }' 00:13:36.301 13:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.301 13:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.560 [2024-11-20 13:27:18.184649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:36.560 [2024-11-20 13:27:18.184840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:36.560 [2024-11-20 13:27:18.184888] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:36.560 [2024-11-20 13:27:18.185294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:13:36.560 [2024-11-20 13:27:18.185950] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:36.560 [2024-11-20 13:27:18.186037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:36.560 [2024-11-20 13:27:18.186400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.560 BaseBdev3 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.560 [ 00:13:36.560 { 00:13:36.560 "name": "BaseBdev3", 00:13:36.560 "aliases": [ 00:13:36.560 "dbe9bf2b-57ab-4cb2-9deb-40b58c508cf3" 00:13:36.560 ], 00:13:36.560 "product_name": "Malloc disk", 00:13:36.560 "block_size": 512, 00:13:36.560 "num_blocks": 65536, 00:13:36.560 "uuid": "dbe9bf2b-57ab-4cb2-9deb-40b58c508cf3", 00:13:36.560 "assigned_rate_limits": { 00:13:36.560 "rw_ios_per_sec": 0, 00:13:36.560 "rw_mbytes_per_sec": 0, 00:13:36.560 "r_mbytes_per_sec": 0, 00:13:36.560 "w_mbytes_per_sec": 0 00:13:36.560 }, 00:13:36.560 "claimed": true, 00:13:36.560 "claim_type": "exclusive_write", 00:13:36.560 "zoned": false, 00:13:36.560 "supported_io_types": { 00:13:36.560 "read": true, 00:13:36.560 "write": true, 00:13:36.560 "unmap": true, 00:13:36.560 "flush": true, 00:13:36.560 "reset": true, 00:13:36.560 "nvme_admin": false, 00:13:36.560 "nvme_io": false, 00:13:36.560 "nvme_io_md": false, 00:13:36.560 "write_zeroes": true, 00:13:36.560 "zcopy": true, 00:13:36.560 "get_zone_info": false, 00:13:36.560 "zone_management": false, 00:13:36.560 "zone_append": false, 00:13:36.560 "compare": false, 00:13:36.560 "compare_and_write": false, 00:13:36.560 "abort": true, 00:13:36.560 "seek_hole": false, 00:13:36.560 "seek_data": false, 00:13:36.560 "copy": true, 00:13:36.560 "nvme_iov_md": false 00:13:36.560 }, 00:13:36.560 "memory_domains": [ 00:13:36.560 { 00:13:36.560 "dma_device_id": "system", 00:13:36.560 "dma_device_type": 1 00:13:36.560 }, 00:13:36.560 { 00:13:36.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.560 "dma_device_type": 2 00:13:36.560 } 00:13:36.560 ], 00:13:36.560 "driver_specific": {} 00:13:36.560 } 00:13:36.560 ] 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.560 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.846 13:27:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.846 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.846 "name": "Existed_Raid", 00:13:36.846 "uuid": "85600dcf-6aa3-4cb0-9097-a1c90c7654d1", 00:13:36.846 "strip_size_kb": 64, 00:13:36.846 "state": "online", 00:13:36.846 "raid_level": "raid5f", 00:13:36.846 "superblock": false, 00:13:36.846 "num_base_bdevs": 3, 00:13:36.846 "num_base_bdevs_discovered": 3, 00:13:36.846 "num_base_bdevs_operational": 3, 00:13:36.846 "base_bdevs_list": [ 00:13:36.846 { 00:13:36.846 "name": "BaseBdev1", 00:13:36.846 "uuid": "65f6d218-2ae1-46ff-92c6-b00ad9ed907c", 00:13:36.846 "is_configured": true, 00:13:36.846 "data_offset": 0, 00:13:36.846 "data_size": 65536 00:13:36.846 }, 00:13:36.846 { 00:13:36.846 "name": "BaseBdev2", 00:13:36.846 "uuid": "8e54b9c9-d025-4380-a8a1-451c245b5a22", 00:13:36.846 "is_configured": true, 00:13:36.846 "data_offset": 0, 00:13:36.846 "data_size": 65536 00:13:36.846 }, 00:13:36.846 { 00:13:36.846 "name": "BaseBdev3", 00:13:36.846 "uuid": "dbe9bf2b-57ab-4cb2-9deb-40b58c508cf3", 00:13:36.846 "is_configured": true, 00:13:36.846 "data_offset": 0, 00:13:36.846 "data_size": 65536 00:13:36.846 } 00:13:36.846 ] 00:13:36.846 }' 00:13:36.846 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.846 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.106 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:37.106 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:37.106 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:37.106 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:37.106 13:27:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:37.107 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:37.107 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:37.107 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:37.107 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.107 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.107 [2024-11-20 13:27:18.708276] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.107 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.107 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:37.107 "name": "Existed_Raid", 00:13:37.107 "aliases": [ 00:13:37.107 "85600dcf-6aa3-4cb0-9097-a1c90c7654d1" 00:13:37.107 ], 00:13:37.107 "product_name": "Raid Volume", 00:13:37.107 "block_size": 512, 00:13:37.107 "num_blocks": 131072, 00:13:37.107 "uuid": "85600dcf-6aa3-4cb0-9097-a1c90c7654d1", 00:13:37.107 "assigned_rate_limits": { 00:13:37.107 "rw_ios_per_sec": 0, 00:13:37.107 "rw_mbytes_per_sec": 0, 00:13:37.107 "r_mbytes_per_sec": 0, 00:13:37.107 "w_mbytes_per_sec": 0 00:13:37.107 }, 00:13:37.107 "claimed": false, 00:13:37.107 "zoned": false, 00:13:37.107 "supported_io_types": { 00:13:37.107 "read": true, 00:13:37.107 "write": true, 00:13:37.107 "unmap": false, 00:13:37.107 "flush": false, 00:13:37.107 "reset": true, 00:13:37.107 "nvme_admin": false, 00:13:37.107 "nvme_io": false, 00:13:37.107 "nvme_io_md": false, 00:13:37.107 "write_zeroes": true, 00:13:37.107 "zcopy": false, 00:13:37.107 "get_zone_info": false, 00:13:37.107 "zone_management": false, 00:13:37.107 "zone_append": false, 
00:13:37.107 "compare": false, 00:13:37.107 "compare_and_write": false, 00:13:37.107 "abort": false, 00:13:37.107 "seek_hole": false, 00:13:37.107 "seek_data": false, 00:13:37.107 "copy": false, 00:13:37.107 "nvme_iov_md": false 00:13:37.107 }, 00:13:37.107 "driver_specific": { 00:13:37.107 "raid": { 00:13:37.107 "uuid": "85600dcf-6aa3-4cb0-9097-a1c90c7654d1", 00:13:37.107 "strip_size_kb": 64, 00:13:37.107 "state": "online", 00:13:37.107 "raid_level": "raid5f", 00:13:37.107 "superblock": false, 00:13:37.107 "num_base_bdevs": 3, 00:13:37.107 "num_base_bdevs_discovered": 3, 00:13:37.107 "num_base_bdevs_operational": 3, 00:13:37.107 "base_bdevs_list": [ 00:13:37.107 { 00:13:37.107 "name": "BaseBdev1", 00:13:37.107 "uuid": "65f6d218-2ae1-46ff-92c6-b00ad9ed907c", 00:13:37.107 "is_configured": true, 00:13:37.107 "data_offset": 0, 00:13:37.107 "data_size": 65536 00:13:37.107 }, 00:13:37.107 { 00:13:37.107 "name": "BaseBdev2", 00:13:37.107 "uuid": "8e54b9c9-d025-4380-a8a1-451c245b5a22", 00:13:37.107 "is_configured": true, 00:13:37.107 "data_offset": 0, 00:13:37.107 "data_size": 65536 00:13:37.107 }, 00:13:37.107 { 00:13:37.107 "name": "BaseBdev3", 00:13:37.107 "uuid": "dbe9bf2b-57ab-4cb2-9deb-40b58c508cf3", 00:13:37.107 "is_configured": true, 00:13:37.107 "data_offset": 0, 00:13:37.107 "data_size": 65536 00:13:37.107 } 00:13:37.107 ] 00:13:37.107 } 00:13:37.107 } 00:13:37.107 }' 00:13:37.107 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:37.366 BaseBdev2 00:13:37.366 BaseBdev3' 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.366 [2024-11-20 13:27:18.983775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:37.366 
13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.366 13:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.366 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.366 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.366 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.366 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.367 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.626 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.626 "name": "Existed_Raid", 00:13:37.626 "uuid": "85600dcf-6aa3-4cb0-9097-a1c90c7654d1", 00:13:37.626 "strip_size_kb": 64, 00:13:37.626 "state": 
"online", 00:13:37.626 "raid_level": "raid5f", 00:13:37.626 "superblock": false, 00:13:37.626 "num_base_bdevs": 3, 00:13:37.626 "num_base_bdevs_discovered": 2, 00:13:37.626 "num_base_bdevs_operational": 2, 00:13:37.626 "base_bdevs_list": [ 00:13:37.626 { 00:13:37.626 "name": null, 00:13:37.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.626 "is_configured": false, 00:13:37.626 "data_offset": 0, 00:13:37.626 "data_size": 65536 00:13:37.626 }, 00:13:37.626 { 00:13:37.626 "name": "BaseBdev2", 00:13:37.626 "uuid": "8e54b9c9-d025-4380-a8a1-451c245b5a22", 00:13:37.626 "is_configured": true, 00:13:37.626 "data_offset": 0, 00:13:37.626 "data_size": 65536 00:13:37.626 }, 00:13:37.626 { 00:13:37.626 "name": "BaseBdev3", 00:13:37.626 "uuid": "dbe9bf2b-57ab-4cb2-9deb-40b58c508cf3", 00:13:37.626 "is_configured": true, 00:13:37.626 "data_offset": 0, 00:13:37.626 "data_size": 65536 00:13:37.626 } 00:13:37.626 ] 00:13:37.626 }' 00:13:37.626 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.626 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.887 [2024-11-20 13:27:19.483951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:37.887 [2024-11-20 13:27:19.484113] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.887 [2024-11-20 13:27:19.496618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.887 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.146 [2024-11-20 13:27:19.552639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:38.146 [2024-11-20 13:27:19.552735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.146 BaseBdev2 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.146 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:38.147 [ 00:13:38.147 { 00:13:38.147 "name": "BaseBdev2", 00:13:38.147 "aliases": [ 00:13:38.147 "987e83e8-bba7-4b7c-8c40-177c5cc06a6c" 00:13:38.147 ], 00:13:38.147 "product_name": "Malloc disk", 00:13:38.147 "block_size": 512, 00:13:38.147 "num_blocks": 65536, 00:13:38.147 "uuid": "987e83e8-bba7-4b7c-8c40-177c5cc06a6c", 00:13:38.147 "assigned_rate_limits": { 00:13:38.147 "rw_ios_per_sec": 0, 00:13:38.147 "rw_mbytes_per_sec": 0, 00:13:38.147 "r_mbytes_per_sec": 0, 00:13:38.147 "w_mbytes_per_sec": 0 00:13:38.147 }, 00:13:38.147 "claimed": false, 00:13:38.147 "zoned": false, 00:13:38.147 "supported_io_types": { 00:13:38.147 "read": true, 00:13:38.147 "write": true, 00:13:38.147 "unmap": true, 00:13:38.147 "flush": true, 00:13:38.147 "reset": true, 00:13:38.147 "nvme_admin": false, 00:13:38.147 "nvme_io": false, 00:13:38.147 "nvme_io_md": false, 00:13:38.147 "write_zeroes": true, 00:13:38.147 "zcopy": true, 00:13:38.147 "get_zone_info": false, 00:13:38.147 "zone_management": false, 00:13:38.147 "zone_append": false, 00:13:38.147 "compare": false, 00:13:38.147 "compare_and_write": false, 00:13:38.147 "abort": true, 00:13:38.147 "seek_hole": false, 00:13:38.147 "seek_data": false, 00:13:38.147 "copy": true, 00:13:38.147 "nvme_iov_md": false 00:13:38.147 }, 00:13:38.147 "memory_domains": [ 00:13:38.147 { 00:13:38.147 "dma_device_id": "system", 00:13:38.147 "dma_device_type": 1 00:13:38.147 }, 00:13:38.147 { 00:13:38.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.147 "dma_device_type": 2 00:13:38.147 } 00:13:38.147 ], 00:13:38.147 "driver_specific": {} 00:13:38.147 } 00:13:38.147 ] 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.147 BaseBdev3 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:38.147 [ 00:13:38.147 { 00:13:38.147 "name": "BaseBdev3", 00:13:38.147 "aliases": [ 00:13:38.147 "24ecfed8-22df-4296-84cc-60fb44b919b4" 00:13:38.147 ], 00:13:38.147 "product_name": "Malloc disk", 00:13:38.147 "block_size": 512, 00:13:38.147 "num_blocks": 65536, 00:13:38.147 "uuid": "24ecfed8-22df-4296-84cc-60fb44b919b4", 00:13:38.147 "assigned_rate_limits": { 00:13:38.147 "rw_ios_per_sec": 0, 00:13:38.147 "rw_mbytes_per_sec": 0, 00:13:38.147 "r_mbytes_per_sec": 0, 00:13:38.147 "w_mbytes_per_sec": 0 00:13:38.147 }, 00:13:38.147 "claimed": false, 00:13:38.147 "zoned": false, 00:13:38.147 "supported_io_types": { 00:13:38.147 "read": true, 00:13:38.147 "write": true, 00:13:38.147 "unmap": true, 00:13:38.147 "flush": true, 00:13:38.147 "reset": true, 00:13:38.147 "nvme_admin": false, 00:13:38.147 "nvme_io": false, 00:13:38.147 "nvme_io_md": false, 00:13:38.147 "write_zeroes": true, 00:13:38.147 "zcopy": true, 00:13:38.147 "get_zone_info": false, 00:13:38.147 "zone_management": false, 00:13:38.147 "zone_append": false, 00:13:38.147 "compare": false, 00:13:38.147 "compare_and_write": false, 00:13:38.147 "abort": true, 00:13:38.147 "seek_hole": false, 00:13:38.147 "seek_data": false, 00:13:38.147 "copy": true, 00:13:38.147 "nvme_iov_md": false 00:13:38.147 }, 00:13:38.147 "memory_domains": [ 00:13:38.147 { 00:13:38.147 "dma_device_id": "system", 00:13:38.147 "dma_device_type": 1 00:13:38.147 }, 00:13:38.147 { 00:13:38.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.147 "dma_device_type": 2 00:13:38.147 } 00:13:38.147 ], 00:13:38.147 "driver_specific": {} 00:13:38.147 } 00:13:38.147 ] 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:38.147 13:27:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.147 [2024-11-20 13:27:19.710932] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:38.147 [2024-11-20 13:27:19.711101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:38.147 [2024-11-20 13:27:19.711144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:38.147 [2024-11-20 13:27:19.713518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.147 13:27:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.147 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.147 "name": "Existed_Raid", 00:13:38.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.147 "strip_size_kb": 64, 00:13:38.147 "state": "configuring", 00:13:38.147 "raid_level": "raid5f", 00:13:38.147 "superblock": false, 00:13:38.147 "num_base_bdevs": 3, 00:13:38.147 "num_base_bdevs_discovered": 2, 00:13:38.147 "num_base_bdevs_operational": 3, 00:13:38.147 "base_bdevs_list": [ 00:13:38.147 { 00:13:38.147 "name": "BaseBdev1", 00:13:38.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.147 "is_configured": false, 00:13:38.147 "data_offset": 0, 00:13:38.147 "data_size": 0 00:13:38.147 }, 00:13:38.147 { 00:13:38.148 "name": "BaseBdev2", 00:13:38.148 "uuid": "987e83e8-bba7-4b7c-8c40-177c5cc06a6c", 00:13:38.148 "is_configured": true, 00:13:38.148 "data_offset": 0, 00:13:38.148 "data_size": 65536 00:13:38.148 }, 00:13:38.148 { 00:13:38.148 "name": "BaseBdev3", 00:13:38.148 "uuid": "24ecfed8-22df-4296-84cc-60fb44b919b4", 00:13:38.148 "is_configured": true, 
00:13:38.148 "data_offset": 0, 00:13:38.148 "data_size": 65536 00:13:38.148 } 00:13:38.148 ] 00:13:38.148 }' 00:13:38.148 13:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.148 13:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.713 [2024-11-20 13:27:20.190165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.713 13:27:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.713 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.713 "name": "Existed_Raid", 00:13:38.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.714 "strip_size_kb": 64, 00:13:38.714 "state": "configuring", 00:13:38.714 "raid_level": "raid5f", 00:13:38.714 "superblock": false, 00:13:38.714 "num_base_bdevs": 3, 00:13:38.714 "num_base_bdevs_discovered": 1, 00:13:38.714 "num_base_bdevs_operational": 3, 00:13:38.714 "base_bdevs_list": [ 00:13:38.714 { 00:13:38.714 "name": "BaseBdev1", 00:13:38.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.714 "is_configured": false, 00:13:38.714 "data_offset": 0, 00:13:38.714 "data_size": 0 00:13:38.714 }, 00:13:38.714 { 00:13:38.714 "name": null, 00:13:38.714 "uuid": "987e83e8-bba7-4b7c-8c40-177c5cc06a6c", 00:13:38.714 "is_configured": false, 00:13:38.714 "data_offset": 0, 00:13:38.714 "data_size": 65536 00:13:38.714 }, 00:13:38.714 { 00:13:38.714 "name": "BaseBdev3", 00:13:38.714 "uuid": "24ecfed8-22df-4296-84cc-60fb44b919b4", 00:13:38.714 "is_configured": true, 00:13:38.714 "data_offset": 0, 00:13:38.714 "data_size": 65536 00:13:38.714 } 00:13:38.714 ] 00:13:38.714 }' 00:13:38.714 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.714 13:27:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.280 [2024-11-20 13:27:20.716923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.280 BaseBdev1 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:39.280 13:27:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.280 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.280 [ 00:13:39.280 { 00:13:39.280 "name": "BaseBdev1", 00:13:39.280 "aliases": [ 00:13:39.280 "51737212-a0e9-4139-b096-e74734908a57" 00:13:39.280 ], 00:13:39.280 "product_name": "Malloc disk", 00:13:39.280 "block_size": 512, 00:13:39.280 "num_blocks": 65536, 00:13:39.280 "uuid": "51737212-a0e9-4139-b096-e74734908a57", 00:13:39.280 "assigned_rate_limits": { 00:13:39.280 "rw_ios_per_sec": 0, 00:13:39.280 "rw_mbytes_per_sec": 0, 00:13:39.280 "r_mbytes_per_sec": 0, 00:13:39.280 "w_mbytes_per_sec": 0 00:13:39.280 }, 00:13:39.280 "claimed": true, 00:13:39.280 "claim_type": "exclusive_write", 00:13:39.280 "zoned": false, 00:13:39.280 "supported_io_types": { 00:13:39.280 "read": true, 00:13:39.280 "write": true, 00:13:39.281 "unmap": true, 00:13:39.281 "flush": true, 00:13:39.281 "reset": true, 00:13:39.281 "nvme_admin": false, 00:13:39.281 "nvme_io": false, 00:13:39.281 "nvme_io_md": false, 00:13:39.281 "write_zeroes": true, 00:13:39.281 "zcopy": true, 00:13:39.281 "get_zone_info": false, 00:13:39.281 "zone_management": false, 00:13:39.281 "zone_append": false, 00:13:39.281 
"compare": false, 00:13:39.281 "compare_and_write": false, 00:13:39.281 "abort": true, 00:13:39.281 "seek_hole": false, 00:13:39.281 "seek_data": false, 00:13:39.281 "copy": true, 00:13:39.281 "nvme_iov_md": false 00:13:39.281 }, 00:13:39.281 "memory_domains": [ 00:13:39.281 { 00:13:39.281 "dma_device_id": "system", 00:13:39.281 "dma_device_type": 1 00:13:39.281 }, 00:13:39.281 { 00:13:39.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.281 "dma_device_type": 2 00:13:39.281 } 00:13:39.281 ], 00:13:39.281 "driver_specific": {} 00:13:39.281 } 00:13:39.281 ] 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.281 13:27:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.281 "name": "Existed_Raid", 00:13:39.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.281 "strip_size_kb": 64, 00:13:39.281 "state": "configuring", 00:13:39.281 "raid_level": "raid5f", 00:13:39.281 "superblock": false, 00:13:39.281 "num_base_bdevs": 3, 00:13:39.281 "num_base_bdevs_discovered": 2, 00:13:39.281 "num_base_bdevs_operational": 3, 00:13:39.281 "base_bdevs_list": [ 00:13:39.281 { 00:13:39.281 "name": "BaseBdev1", 00:13:39.281 "uuid": "51737212-a0e9-4139-b096-e74734908a57", 00:13:39.281 "is_configured": true, 00:13:39.281 "data_offset": 0, 00:13:39.281 "data_size": 65536 00:13:39.281 }, 00:13:39.281 { 00:13:39.281 "name": null, 00:13:39.281 "uuid": "987e83e8-bba7-4b7c-8c40-177c5cc06a6c", 00:13:39.281 "is_configured": false, 00:13:39.281 "data_offset": 0, 00:13:39.281 "data_size": 65536 00:13:39.281 }, 00:13:39.281 { 00:13:39.281 "name": "BaseBdev3", 00:13:39.281 "uuid": "24ecfed8-22df-4296-84cc-60fb44b919b4", 00:13:39.281 "is_configured": true, 00:13:39.281 "data_offset": 0, 00:13:39.281 "data_size": 65536 00:13:39.281 } 00:13:39.281 ] 00:13:39.281 }' 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.281 13:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.541 13:27:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.541 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.541 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:39.541 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.541 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.801 [2024-11-20 13:27:21.212215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.801 13:27:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.801 "name": "Existed_Raid", 00:13:39.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.801 "strip_size_kb": 64, 00:13:39.801 "state": "configuring", 00:13:39.801 "raid_level": "raid5f", 00:13:39.801 "superblock": false, 00:13:39.801 "num_base_bdevs": 3, 00:13:39.801 "num_base_bdevs_discovered": 1, 00:13:39.801 "num_base_bdevs_operational": 3, 00:13:39.801 "base_bdevs_list": [ 00:13:39.801 { 00:13:39.801 "name": "BaseBdev1", 00:13:39.801 "uuid": "51737212-a0e9-4139-b096-e74734908a57", 00:13:39.801 "is_configured": true, 00:13:39.801 "data_offset": 0, 00:13:39.801 "data_size": 65536 00:13:39.801 }, 00:13:39.801 { 00:13:39.801 "name": null, 00:13:39.801 "uuid": "987e83e8-bba7-4b7c-8c40-177c5cc06a6c", 00:13:39.801 "is_configured": false, 00:13:39.801 "data_offset": 0, 00:13:39.801 "data_size": 65536 00:13:39.801 }, 00:13:39.801 { 00:13:39.801 "name": null, 
00:13:39.801 "uuid": "24ecfed8-22df-4296-84cc-60fb44b919b4", 00:13:39.801 "is_configured": false, 00:13:39.801 "data_offset": 0, 00:13:39.801 "data_size": 65536 00:13:39.801 } 00:13:39.801 ] 00:13:39.801 }' 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.801 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.059 [2024-11-20 13:27:21.715766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.059 13:27:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.059 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.317 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.317 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.317 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.317 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.317 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.317 "name": "Existed_Raid", 00:13:40.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.317 "strip_size_kb": 64, 00:13:40.317 "state": "configuring", 00:13:40.317 "raid_level": "raid5f", 00:13:40.317 "superblock": false, 00:13:40.317 "num_base_bdevs": 3, 00:13:40.317 "num_base_bdevs_discovered": 2, 00:13:40.317 "num_base_bdevs_operational": 3, 00:13:40.317 "base_bdevs_list": [ 00:13:40.317 { 
00:13:40.317 "name": "BaseBdev1", 00:13:40.317 "uuid": "51737212-a0e9-4139-b096-e74734908a57", 00:13:40.317 "is_configured": true, 00:13:40.317 "data_offset": 0, 00:13:40.317 "data_size": 65536 00:13:40.317 }, 00:13:40.317 { 00:13:40.317 "name": null, 00:13:40.317 "uuid": "987e83e8-bba7-4b7c-8c40-177c5cc06a6c", 00:13:40.317 "is_configured": false, 00:13:40.317 "data_offset": 0, 00:13:40.317 "data_size": 65536 00:13:40.317 }, 00:13:40.317 { 00:13:40.317 "name": "BaseBdev3", 00:13:40.317 "uuid": "24ecfed8-22df-4296-84cc-60fb44b919b4", 00:13:40.317 "is_configured": true, 00:13:40.317 "data_offset": 0, 00:13:40.317 "data_size": 65536 00:13:40.317 } 00:13:40.317 ] 00:13:40.317 }' 00:13:40.317 13:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.317 13:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.577 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:40.577 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.577 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.577 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.577 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.836 [2024-11-20 13:27:22.255331] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.836 13:27:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.836 "name": "Existed_Raid", 00:13:40.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.836 "strip_size_kb": 64, 00:13:40.836 "state": "configuring", 00:13:40.836 "raid_level": "raid5f", 00:13:40.836 "superblock": false, 00:13:40.836 "num_base_bdevs": 3, 00:13:40.836 "num_base_bdevs_discovered": 1, 00:13:40.836 "num_base_bdevs_operational": 3, 00:13:40.836 "base_bdevs_list": [ 00:13:40.836 { 00:13:40.836 "name": null, 00:13:40.836 "uuid": "51737212-a0e9-4139-b096-e74734908a57", 00:13:40.836 "is_configured": false, 00:13:40.836 "data_offset": 0, 00:13:40.836 "data_size": 65536 00:13:40.836 }, 00:13:40.836 { 00:13:40.836 "name": null, 00:13:40.836 "uuid": "987e83e8-bba7-4b7c-8c40-177c5cc06a6c", 00:13:40.836 "is_configured": false, 00:13:40.836 "data_offset": 0, 00:13:40.836 "data_size": 65536 00:13:40.836 }, 00:13:40.836 { 00:13:40.837 "name": "BaseBdev3", 00:13:40.837 "uuid": "24ecfed8-22df-4296-84cc-60fb44b919b4", 00:13:40.837 "is_configured": true, 00:13:40.837 "data_offset": 0, 00:13:40.837 "data_size": 65536 00:13:40.837 } 00:13:40.837 ] 00:13:40.837 }' 00:13:40.837 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.837 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.096 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:41.096 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.096 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.096 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.096 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.096 13:27:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:41.096 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:41.096 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.096 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.355 [2024-11-20 13:27:22.765678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.355 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.355 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:41.355 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.355 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.355 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.355 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.355 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.355 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.355 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.355 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.355 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.355 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.355 13:27:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.355 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.356 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.356 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.356 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.356 "name": "Existed_Raid", 00:13:41.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.356 "strip_size_kb": 64, 00:13:41.356 "state": "configuring", 00:13:41.356 "raid_level": "raid5f", 00:13:41.356 "superblock": false, 00:13:41.356 "num_base_bdevs": 3, 00:13:41.356 "num_base_bdevs_discovered": 2, 00:13:41.356 "num_base_bdevs_operational": 3, 00:13:41.356 "base_bdevs_list": [ 00:13:41.356 { 00:13:41.356 "name": null, 00:13:41.356 "uuid": "51737212-a0e9-4139-b096-e74734908a57", 00:13:41.356 "is_configured": false, 00:13:41.356 "data_offset": 0, 00:13:41.356 "data_size": 65536 00:13:41.356 }, 00:13:41.356 { 00:13:41.356 "name": "BaseBdev2", 00:13:41.356 "uuid": "987e83e8-bba7-4b7c-8c40-177c5cc06a6c", 00:13:41.356 "is_configured": true, 00:13:41.356 "data_offset": 0, 00:13:41.356 "data_size": 65536 00:13:41.356 }, 00:13:41.356 { 00:13:41.356 "name": "BaseBdev3", 00:13:41.356 "uuid": "24ecfed8-22df-4296-84cc-60fb44b919b4", 00:13:41.356 "is_configured": true, 00:13:41.356 "data_offset": 0, 00:13:41.356 "data_size": 65536 00:13:41.356 } 00:13:41.356 ] 00:13:41.356 }' 00:13:41.356 13:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.356 13:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.614 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.614 13:27:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.614 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.614 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:41.614 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.614 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:41.614 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:41.614 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.614 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.614 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.872 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.872 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 51737212-a0e9-4139-b096-e74734908a57 00:13:41.872 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.872 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.872 [2024-11-20 13:27:23.325273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:41.872 [2024-11-20 13:27:23.325440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:41.872 [2024-11-20 13:27:23.325458] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:41.872 [2024-11-20 13:27:23.325753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002870 00:13:41.872 [2024-11-20 13:27:23.326266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:41.872 [2024-11-20 13:27:23.326289] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:13:41.872 [2024-11-20 13:27:23.326540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.872 NewBaseBdev 00:13:41.872 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.872 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:41.872 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.873 13:27:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.873 [ 00:13:41.873 { 00:13:41.873 "name": "NewBaseBdev", 00:13:41.873 "aliases": [ 00:13:41.873 "51737212-a0e9-4139-b096-e74734908a57" 00:13:41.873 ], 00:13:41.873 "product_name": "Malloc disk", 00:13:41.873 "block_size": 512, 00:13:41.873 "num_blocks": 65536, 00:13:41.873 "uuid": "51737212-a0e9-4139-b096-e74734908a57", 00:13:41.873 "assigned_rate_limits": { 00:13:41.873 "rw_ios_per_sec": 0, 00:13:41.873 "rw_mbytes_per_sec": 0, 00:13:41.873 "r_mbytes_per_sec": 0, 00:13:41.873 "w_mbytes_per_sec": 0 00:13:41.873 }, 00:13:41.873 "claimed": true, 00:13:41.873 "claim_type": "exclusive_write", 00:13:41.873 "zoned": false, 00:13:41.873 "supported_io_types": { 00:13:41.873 "read": true, 00:13:41.873 "write": true, 00:13:41.873 "unmap": true, 00:13:41.873 "flush": true, 00:13:41.873 "reset": true, 00:13:41.873 "nvme_admin": false, 00:13:41.873 "nvme_io": false, 00:13:41.873 "nvme_io_md": false, 00:13:41.873 "write_zeroes": true, 00:13:41.873 "zcopy": true, 00:13:41.873 "get_zone_info": false, 00:13:41.873 "zone_management": false, 00:13:41.873 "zone_append": false, 00:13:41.873 "compare": false, 00:13:41.873 "compare_and_write": false, 00:13:41.873 "abort": true, 00:13:41.873 "seek_hole": false, 00:13:41.873 "seek_data": false, 00:13:41.873 "copy": true, 00:13:41.873 "nvme_iov_md": false 00:13:41.873 }, 00:13:41.873 "memory_domains": [ 00:13:41.873 { 00:13:41.873 "dma_device_id": "system", 00:13:41.873 "dma_device_type": 1 00:13:41.873 }, 00:13:41.873 { 00:13:41.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.873 "dma_device_type": 2 00:13:41.873 } 00:13:41.873 ], 00:13:41.873 "driver_specific": {} 00:13:41.873 } 00:13:41.873 ] 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:41.873 13:27:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.873 "name": "Existed_Raid", 00:13:41.873 "uuid": "26d5c505-08c9-4f53-9589-b4e588024967", 00:13:41.873 "strip_size_kb": 64, 00:13:41.873 "state": "online", 
00:13:41.873 "raid_level": "raid5f", 00:13:41.873 "superblock": false, 00:13:41.873 "num_base_bdevs": 3, 00:13:41.873 "num_base_bdevs_discovered": 3, 00:13:41.873 "num_base_bdevs_operational": 3, 00:13:41.873 "base_bdevs_list": [ 00:13:41.873 { 00:13:41.873 "name": "NewBaseBdev", 00:13:41.873 "uuid": "51737212-a0e9-4139-b096-e74734908a57", 00:13:41.873 "is_configured": true, 00:13:41.873 "data_offset": 0, 00:13:41.873 "data_size": 65536 00:13:41.873 }, 00:13:41.873 { 00:13:41.873 "name": "BaseBdev2", 00:13:41.873 "uuid": "987e83e8-bba7-4b7c-8c40-177c5cc06a6c", 00:13:41.873 "is_configured": true, 00:13:41.873 "data_offset": 0, 00:13:41.873 "data_size": 65536 00:13:41.873 }, 00:13:41.873 { 00:13:41.873 "name": "BaseBdev3", 00:13:41.873 "uuid": "24ecfed8-22df-4296-84cc-60fb44b919b4", 00:13:41.873 "is_configured": true, 00:13:41.873 "data_offset": 0, 00:13:41.873 "data_size": 65536 00:13:41.873 } 00:13:41.873 ] 00:13:41.873 }' 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.873 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.132 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:42.132 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:42.132 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:42.132 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:42.132 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:42.132 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:42.132 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:42.133 13:27:23 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:42.133 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.133 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.133 [2024-11-20 13:27:23.793012] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.391 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.391 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:42.391 "name": "Existed_Raid", 00:13:42.391 "aliases": [ 00:13:42.391 "26d5c505-08c9-4f53-9589-b4e588024967" 00:13:42.391 ], 00:13:42.391 "product_name": "Raid Volume", 00:13:42.391 "block_size": 512, 00:13:42.391 "num_blocks": 131072, 00:13:42.391 "uuid": "26d5c505-08c9-4f53-9589-b4e588024967", 00:13:42.391 "assigned_rate_limits": { 00:13:42.392 "rw_ios_per_sec": 0, 00:13:42.392 "rw_mbytes_per_sec": 0, 00:13:42.392 "r_mbytes_per_sec": 0, 00:13:42.392 "w_mbytes_per_sec": 0 00:13:42.392 }, 00:13:42.392 "claimed": false, 00:13:42.392 "zoned": false, 00:13:42.392 "supported_io_types": { 00:13:42.392 "read": true, 00:13:42.392 "write": true, 00:13:42.392 "unmap": false, 00:13:42.392 "flush": false, 00:13:42.392 "reset": true, 00:13:42.392 "nvme_admin": false, 00:13:42.392 "nvme_io": false, 00:13:42.392 "nvme_io_md": false, 00:13:42.392 "write_zeroes": true, 00:13:42.392 "zcopy": false, 00:13:42.392 "get_zone_info": false, 00:13:42.392 "zone_management": false, 00:13:42.392 "zone_append": false, 00:13:42.392 "compare": false, 00:13:42.392 "compare_and_write": false, 00:13:42.392 "abort": false, 00:13:42.392 "seek_hole": false, 00:13:42.392 "seek_data": false, 00:13:42.392 "copy": false, 00:13:42.392 "nvme_iov_md": false 00:13:42.392 }, 00:13:42.392 "driver_specific": { 00:13:42.392 "raid": { 00:13:42.392 "uuid": "26d5c505-08c9-4f53-9589-b4e588024967", 
00:13:42.392 "strip_size_kb": 64, 00:13:42.392 "state": "online", 00:13:42.392 "raid_level": "raid5f", 00:13:42.392 "superblock": false, 00:13:42.392 "num_base_bdevs": 3, 00:13:42.392 "num_base_bdevs_discovered": 3, 00:13:42.392 "num_base_bdevs_operational": 3, 00:13:42.392 "base_bdevs_list": [ 00:13:42.392 { 00:13:42.392 "name": "NewBaseBdev", 00:13:42.392 "uuid": "51737212-a0e9-4139-b096-e74734908a57", 00:13:42.392 "is_configured": true, 00:13:42.392 "data_offset": 0, 00:13:42.392 "data_size": 65536 00:13:42.392 }, 00:13:42.392 { 00:13:42.392 "name": "BaseBdev2", 00:13:42.392 "uuid": "987e83e8-bba7-4b7c-8c40-177c5cc06a6c", 00:13:42.392 "is_configured": true, 00:13:42.392 "data_offset": 0, 00:13:42.392 "data_size": 65536 00:13:42.392 }, 00:13:42.392 { 00:13:42.392 "name": "BaseBdev3", 00:13:42.392 "uuid": "24ecfed8-22df-4296-84cc-60fb44b919b4", 00:13:42.392 "is_configured": true, 00:13:42.392 "data_offset": 0, 00:13:42.392 "data_size": 65536 00:13:42.392 } 00:13:42.392 ] 00:13:42.392 } 00:13:42.392 } 00:13:42.392 }' 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:42.392 BaseBdev2 00:13:42.392 BaseBdev3' 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:42.392 13:27:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.392 13:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.392 13:27:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.392 13:27:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.392 13:27:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:42.392 13:27:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.392 13:27:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.392 [2024-11-20 13:27:24.032346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:42.392 [2024-11-20 13:27:24.032470] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.392 [2024-11-20 13:27:24.032597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.392 [2024-11-20 13:27:24.032914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.392 [2024-11-20 13:27:24.032943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:13:42.392 13:27:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.392 13:27:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90195 00:13:42.392 13:27:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 90195 ']' 00:13:42.392 13:27:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 90195 
00:13:42.392 13:27:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:42.393 13:27:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.393 13:27:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90195 00:13:42.651 killing process with pid 90195 00:13:42.651 13:27:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:42.651 13:27:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:42.652 13:27:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90195' 00:13:42.652 13:27:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 90195 00:13:42.652 [2024-11-20 13:27:24.074059] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:42.652 13:27:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 90195 00:13:42.652 [2024-11-20 13:27:24.107378] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.918 ************************************ 00:13:42.918 END TEST raid5f_state_function_test 00:13:42.918 ************************************ 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:42.918 00:13:42.918 real 0m9.298s 00:13:42.918 user 0m15.942s 00:13:42.918 sys 0m1.749s 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.918 13:27:24 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:42.918 13:27:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:42.918 
13:27:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.918 13:27:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.918 ************************************ 00:13:42.918 START TEST raid5f_state_function_test_sb 00:13:42.918 ************************************ 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:42.918 
13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:42.918 Process raid pid: 90800 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=90800 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90800' 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 90800 00:13:42.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 90800 ']' 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.918 13:27:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.918 [2024-11-20 13:27:24.492805] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:13:42.918 [2024-11-20 13:27:24.492966] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.176 [2024-11-20 13:27:24.644263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.176 [2024-11-20 13:27:24.676743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.176 [2024-11-20 13:27:24.723066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.176 [2024-11-20 13:27:24.723111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.111 [2024-11-20 13:27:25.435825] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.111 [2024-11-20 13:27:25.436001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.111 [2024-11-20 13:27:25.436020] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:44.111 [2024-11-20 13:27:25.436033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.111 [2024-11-20 13:27:25.436041] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:44.111 [2024-11-20 13:27:25.436054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.111 13:27:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.111 "name": "Existed_Raid", 00:13:44.111 "uuid": "961b8197-c9b5-4c6a-bc45-3f25518f1c42", 00:13:44.111 "strip_size_kb": 64, 00:13:44.111 "state": "configuring", 00:13:44.111 "raid_level": "raid5f", 00:13:44.111 "superblock": true, 00:13:44.111 "num_base_bdevs": 3, 00:13:44.111 "num_base_bdevs_discovered": 0, 00:13:44.111 "num_base_bdevs_operational": 3, 00:13:44.111 "base_bdevs_list": [ 00:13:44.111 { 00:13:44.111 "name": "BaseBdev1", 00:13:44.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.111 "is_configured": false, 00:13:44.111 "data_offset": 0, 00:13:44.111 "data_size": 0 00:13:44.111 }, 00:13:44.111 { 00:13:44.111 "name": "BaseBdev2", 00:13:44.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.111 "is_configured": false, 00:13:44.111 "data_offset": 0, 00:13:44.111 "data_size": 0 00:13:44.111 }, 00:13:44.111 { 00:13:44.111 "name": "BaseBdev3", 00:13:44.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.111 "is_configured": false, 00:13:44.111 "data_offset": 0, 00:13:44.111 "data_size": 0 00:13:44.111 } 00:13:44.111 ] 00:13:44.111 }' 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.111 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.370 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:44.370 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.370 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.370 [2024-11-20 13:27:25.891732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.370 
[2024-11-20 13:27:25.891804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:13:44.370 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.370 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:44.370 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.370 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.370 [2024-11-20 13:27:25.903799] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.370 [2024-11-20 13:27:25.903874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.370 [2024-11-20 13:27:25.903888] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:44.370 [2024-11-20 13:27:25.903899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.370 [2024-11-20 13:27:25.903907] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:44.370 [2024-11-20 13:27:25.903918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:44.370 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.370 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:44.370 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.371 [2024-11-20 13:27:25.925663] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.371 BaseBdev1 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.371 [ 00:13:44.371 { 00:13:44.371 "name": "BaseBdev1", 00:13:44.371 "aliases": [ 00:13:44.371 "2da4ccc6-a00d-444d-a6b2-483aa2c782bc" 00:13:44.371 ], 00:13:44.371 "product_name": "Malloc disk", 00:13:44.371 "block_size": 512, 00:13:44.371 
"num_blocks": 65536, 00:13:44.371 "uuid": "2da4ccc6-a00d-444d-a6b2-483aa2c782bc", 00:13:44.371 "assigned_rate_limits": { 00:13:44.371 "rw_ios_per_sec": 0, 00:13:44.371 "rw_mbytes_per_sec": 0, 00:13:44.371 "r_mbytes_per_sec": 0, 00:13:44.371 "w_mbytes_per_sec": 0 00:13:44.371 }, 00:13:44.371 "claimed": true, 00:13:44.371 "claim_type": "exclusive_write", 00:13:44.371 "zoned": false, 00:13:44.371 "supported_io_types": { 00:13:44.371 "read": true, 00:13:44.371 "write": true, 00:13:44.371 "unmap": true, 00:13:44.371 "flush": true, 00:13:44.371 "reset": true, 00:13:44.371 "nvme_admin": false, 00:13:44.371 "nvme_io": false, 00:13:44.371 "nvme_io_md": false, 00:13:44.371 "write_zeroes": true, 00:13:44.371 "zcopy": true, 00:13:44.371 "get_zone_info": false, 00:13:44.371 "zone_management": false, 00:13:44.371 "zone_append": false, 00:13:44.371 "compare": false, 00:13:44.371 "compare_and_write": false, 00:13:44.371 "abort": true, 00:13:44.371 "seek_hole": false, 00:13:44.371 "seek_data": false, 00:13:44.371 "copy": true, 00:13:44.371 "nvme_iov_md": false 00:13:44.371 }, 00:13:44.371 "memory_domains": [ 00:13:44.371 { 00:13:44.371 "dma_device_id": "system", 00:13:44.371 "dma_device_type": 1 00:13:44.371 }, 00:13:44.371 { 00:13:44.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.371 "dma_device_type": 2 00:13:44.371 } 00:13:44.371 ], 00:13:44.371 "driver_specific": {} 00:13:44.371 } 00:13:44.371 ] 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.371 13:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.371 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.371 "name": "Existed_Raid", 00:13:44.371 "uuid": "fb70291d-cd4e-4404-9bc8-ada7ae75936b", 00:13:44.371 "strip_size_kb": 64, 00:13:44.371 "state": "configuring", 00:13:44.371 "raid_level": "raid5f", 00:13:44.371 "superblock": true, 00:13:44.371 "num_base_bdevs": 3, 00:13:44.371 "num_base_bdevs_discovered": 1, 00:13:44.371 "num_base_bdevs_operational": 3, 00:13:44.371 "base_bdevs_list": [ 00:13:44.371 { 00:13:44.371 
"name": "BaseBdev1", 00:13:44.371 "uuid": "2da4ccc6-a00d-444d-a6b2-483aa2c782bc", 00:13:44.371 "is_configured": true, 00:13:44.371 "data_offset": 2048, 00:13:44.371 "data_size": 63488 00:13:44.371 }, 00:13:44.371 { 00:13:44.371 "name": "BaseBdev2", 00:13:44.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.371 "is_configured": false, 00:13:44.371 "data_offset": 0, 00:13:44.371 "data_size": 0 00:13:44.371 }, 00:13:44.371 { 00:13:44.371 "name": "BaseBdev3", 00:13:44.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.371 "is_configured": false, 00:13:44.371 "data_offset": 0, 00:13:44.371 "data_size": 0 00:13:44.371 } 00:13:44.371 ] 00:13:44.371 }' 00:13:44.371 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.371 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.938 [2024-11-20 13:27:26.385184] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.938 [2024-11-20 13:27:26.385339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:44.938 [2024-11-20 13:27:26.397274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.938 [2024-11-20 13:27:26.399698] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:44.938 [2024-11-20 13:27:26.399829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.938 [2024-11-20 13:27:26.399870] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:44.938 [2024-11-20 13:27:26.399914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.938 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.938 "name": "Existed_Raid", 00:13:44.938 "uuid": "80f51f50-b76a-48fb-a8f3-00aef76cb533", 00:13:44.938 "strip_size_kb": 64, 00:13:44.938 "state": "configuring", 00:13:44.938 "raid_level": "raid5f", 00:13:44.938 "superblock": true, 00:13:44.938 "num_base_bdevs": 3, 00:13:44.938 "num_base_bdevs_discovered": 1, 00:13:44.939 "num_base_bdevs_operational": 3, 00:13:44.939 "base_bdevs_list": [ 00:13:44.939 { 00:13:44.939 "name": "BaseBdev1", 00:13:44.939 "uuid": "2da4ccc6-a00d-444d-a6b2-483aa2c782bc", 00:13:44.939 "is_configured": true, 00:13:44.939 "data_offset": 2048, 00:13:44.939 "data_size": 63488 00:13:44.939 }, 00:13:44.939 { 00:13:44.939 "name": "BaseBdev2", 00:13:44.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.939 "is_configured": false, 00:13:44.939 "data_offset": 0, 00:13:44.939 "data_size": 0 00:13:44.939 }, 00:13:44.939 { 00:13:44.939 "name": "BaseBdev3", 00:13:44.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.939 "is_configured": false, 00:13:44.939 "data_offset": 0, 00:13:44.939 "data_size": 
0 00:13:44.939 } 00:13:44.939 ] 00:13:44.939 }' 00:13:44.939 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.939 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.196 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:45.196 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.196 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.454 [2024-11-20 13:27:26.864302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.454 BaseBdev2 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.454 [ 00:13:45.454 { 00:13:45.454 "name": "BaseBdev2", 00:13:45.454 "aliases": [ 00:13:45.454 "5cf01a73-6193-40fe-81cf-d2e85ef61e42" 00:13:45.454 ], 00:13:45.454 "product_name": "Malloc disk", 00:13:45.454 "block_size": 512, 00:13:45.454 "num_blocks": 65536, 00:13:45.454 "uuid": "5cf01a73-6193-40fe-81cf-d2e85ef61e42", 00:13:45.454 "assigned_rate_limits": { 00:13:45.454 "rw_ios_per_sec": 0, 00:13:45.454 "rw_mbytes_per_sec": 0, 00:13:45.454 "r_mbytes_per_sec": 0, 00:13:45.454 "w_mbytes_per_sec": 0 00:13:45.454 }, 00:13:45.454 "claimed": true, 00:13:45.454 "claim_type": "exclusive_write", 00:13:45.454 "zoned": false, 00:13:45.454 "supported_io_types": { 00:13:45.454 "read": true, 00:13:45.454 "write": true, 00:13:45.454 "unmap": true, 00:13:45.454 "flush": true, 00:13:45.454 "reset": true, 00:13:45.454 "nvme_admin": false, 00:13:45.454 "nvme_io": false, 00:13:45.454 "nvme_io_md": false, 00:13:45.454 "write_zeroes": true, 00:13:45.454 "zcopy": true, 00:13:45.454 "get_zone_info": false, 00:13:45.454 "zone_management": false, 00:13:45.454 "zone_append": false, 00:13:45.454 "compare": false, 00:13:45.454 "compare_and_write": false, 00:13:45.454 "abort": true, 00:13:45.454 "seek_hole": false, 00:13:45.454 "seek_data": false, 00:13:45.454 "copy": true, 00:13:45.454 "nvme_iov_md": false 00:13:45.454 }, 00:13:45.454 "memory_domains": [ 00:13:45.454 { 00:13:45.454 "dma_device_id": "system", 00:13:45.454 "dma_device_type": 1 00:13:45.454 }, 00:13:45.454 { 00:13:45.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.454 "dma_device_type": 2 00:13:45.454 } 
00:13:45.454 ], 00:13:45.454 "driver_specific": {} 00:13:45.454 } 00:13:45.454 ] 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.454 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:45.455 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.455 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.455 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.455 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.455 "name": "Existed_Raid", 00:13:45.455 "uuid": "80f51f50-b76a-48fb-a8f3-00aef76cb533", 00:13:45.455 "strip_size_kb": 64, 00:13:45.455 "state": "configuring", 00:13:45.455 "raid_level": "raid5f", 00:13:45.455 "superblock": true, 00:13:45.455 "num_base_bdevs": 3, 00:13:45.455 "num_base_bdevs_discovered": 2, 00:13:45.455 "num_base_bdevs_operational": 3, 00:13:45.455 "base_bdevs_list": [ 00:13:45.455 { 00:13:45.455 "name": "BaseBdev1", 00:13:45.455 "uuid": "2da4ccc6-a00d-444d-a6b2-483aa2c782bc", 00:13:45.455 "is_configured": true, 00:13:45.455 "data_offset": 2048, 00:13:45.455 "data_size": 63488 00:13:45.455 }, 00:13:45.455 { 00:13:45.455 "name": "BaseBdev2", 00:13:45.455 "uuid": "5cf01a73-6193-40fe-81cf-d2e85ef61e42", 00:13:45.455 "is_configured": true, 00:13:45.455 "data_offset": 2048, 00:13:45.455 "data_size": 63488 00:13:45.455 }, 00:13:45.455 { 00:13:45.455 "name": "BaseBdev3", 00:13:45.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.455 "is_configured": false, 00:13:45.455 "data_offset": 0, 00:13:45.455 "data_size": 0 00:13:45.455 } 00:13:45.455 ] 00:13:45.455 }' 00:13:45.455 13:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.455 13:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.713 [2024-11-20 13:27:27.348789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.713 [2024-11-20 13:27:27.349232] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:45.713 [2024-11-20 13:27:27.349318] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:45.713 BaseBdev3 [2024-11-20 13:27:27.349810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:45.713 [2024-11-20 13:27:27.350582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:45.713 [2024-11-20 13:27:27.350665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:45.713 [2024-11-20 13:27:27.350953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.713 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.713 [ 00:13:45.713 { 00:13:45.713 "name": "BaseBdev3", 00:13:45.713 "aliases": [ 00:13:45.713 "39c702a6-509a-44f9-8018-689b214a6a81" 00:13:45.713 ], 00:13:45.713 "product_name": "Malloc disk", 00:13:45.713 "block_size": 512, 00:13:45.713 "num_blocks": 65536, 00:13:45.713 "uuid": "39c702a6-509a-44f9-8018-689b214a6a81", 00:13:45.713 "assigned_rate_limits": { 00:13:45.713 "rw_ios_per_sec": 0, 00:13:45.713 "rw_mbytes_per_sec": 0, 00:13:45.713 "r_mbytes_per_sec": 0, 00:13:45.713 "w_mbytes_per_sec": 0 00:13:45.713 }, 00:13:45.713 "claimed": true, 00:13:45.713 "claim_type": "exclusive_write", 00:13:45.713 "zoned": false, 00:13:45.713 "supported_io_types": { 00:13:45.713 "read": true, 00:13:45.713 "write": true, 00:13:45.713 "unmap": true, 00:13:45.713 "flush": true, 00:13:45.713 "reset": true, 00:13:45.713 "nvme_admin": false, 00:13:45.713 "nvme_io": false, 00:13:45.713 "nvme_io_md": false, 00:13:45.713 "write_zeroes": true, 00:13:45.713 "zcopy": true, 00:13:45.713 "get_zone_info": false, 00:13:45.713 "zone_management": false, 00:13:45.713 "zone_append": false, 00:13:45.713 "compare": false, 00:13:45.713 "compare_and_write": false, 00:13:45.713 "abort": true, 00:13:45.713 "seek_hole": false, 00:13:45.713 "seek_data": false, 00:13:45.713 "copy": true, 00:13:45.978 "nvme_iov_md": 
false 00:13:45.978 }, 00:13:45.978 "memory_domains": [ 00:13:45.978 { 00:13:45.978 "dma_device_id": "system", 00:13:45.978 "dma_device_type": 1 00:13:45.978 }, 00:13:45.978 { 00:13:45.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.978 "dma_device_type": 2 00:13:45.978 } 00:13:45.978 ], 00:13:45.978 "driver_specific": {} 00:13:45.978 } 00:13:45.978 ] 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.978 "name": "Existed_Raid", 00:13:45.978 "uuid": "80f51f50-b76a-48fb-a8f3-00aef76cb533", 00:13:45.978 "strip_size_kb": 64, 00:13:45.978 "state": "online", 00:13:45.978 "raid_level": "raid5f", 00:13:45.978 "superblock": true, 00:13:45.978 "num_base_bdevs": 3, 00:13:45.978 "num_base_bdevs_discovered": 3, 00:13:45.978 "num_base_bdevs_operational": 3, 00:13:45.978 "base_bdevs_list": [ 00:13:45.978 { 00:13:45.978 "name": "BaseBdev1", 00:13:45.978 "uuid": "2da4ccc6-a00d-444d-a6b2-483aa2c782bc", 00:13:45.978 "is_configured": true, 00:13:45.978 "data_offset": 2048, 00:13:45.978 "data_size": 63488 00:13:45.978 }, 00:13:45.978 { 00:13:45.978 "name": "BaseBdev2", 00:13:45.978 "uuid": "5cf01a73-6193-40fe-81cf-d2e85ef61e42", 00:13:45.978 "is_configured": true, 00:13:45.978 "data_offset": 2048, 00:13:45.978 "data_size": 63488 00:13:45.978 }, 00:13:45.978 { 00:13:45.978 "name": "BaseBdev3", 00:13:45.978 "uuid": "39c702a6-509a-44f9-8018-689b214a6a81", 00:13:45.978 "is_configured": true, 00:13:45.978 "data_offset": 2048, 00:13:45.978 "data_size": 63488 00:13:45.978 } 00:13:45.978 ] 00:13:45.978 }' 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.978 13:27:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:46.297 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:46.297 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:46.297 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:46.297 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:46.297 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:46.297 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:46.297 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:46.297 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:46.297 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.297 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.297 [2024-11-20 13:27:27.868372] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.297 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.297 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:46.297 "name": "Existed_Raid", 00:13:46.297 "aliases": [ 00:13:46.297 "80f51f50-b76a-48fb-a8f3-00aef76cb533" 00:13:46.297 ], 00:13:46.297 "product_name": "Raid Volume", 00:13:46.297 "block_size": 512, 00:13:46.297 "num_blocks": 126976, 00:13:46.297 "uuid": "80f51f50-b76a-48fb-a8f3-00aef76cb533", 00:13:46.297 "assigned_rate_limits": { 00:13:46.297 "rw_ios_per_sec": 0, 00:13:46.297 "rw_mbytes_per_sec": 0, 00:13:46.297 "r_mbytes_per_sec": 
0, 00:13:46.297 "w_mbytes_per_sec": 0 00:13:46.297 }, 00:13:46.297 "claimed": false, 00:13:46.297 "zoned": false, 00:13:46.297 "supported_io_types": { 00:13:46.297 "read": true, 00:13:46.297 "write": true, 00:13:46.297 "unmap": false, 00:13:46.297 "flush": false, 00:13:46.297 "reset": true, 00:13:46.297 "nvme_admin": false, 00:13:46.297 "nvme_io": false, 00:13:46.297 "nvme_io_md": false, 00:13:46.297 "write_zeroes": true, 00:13:46.297 "zcopy": false, 00:13:46.297 "get_zone_info": false, 00:13:46.297 "zone_management": false, 00:13:46.297 "zone_append": false, 00:13:46.297 "compare": false, 00:13:46.297 "compare_and_write": false, 00:13:46.297 "abort": false, 00:13:46.297 "seek_hole": false, 00:13:46.297 "seek_data": false, 00:13:46.297 "copy": false, 00:13:46.297 "nvme_iov_md": false 00:13:46.297 }, 00:13:46.297 "driver_specific": { 00:13:46.297 "raid": { 00:13:46.297 "uuid": "80f51f50-b76a-48fb-a8f3-00aef76cb533", 00:13:46.297 "strip_size_kb": 64, 00:13:46.297 "state": "online", 00:13:46.297 "raid_level": "raid5f", 00:13:46.297 "superblock": true, 00:13:46.297 "num_base_bdevs": 3, 00:13:46.297 "num_base_bdevs_discovered": 3, 00:13:46.297 "num_base_bdevs_operational": 3, 00:13:46.297 "base_bdevs_list": [ 00:13:46.297 { 00:13:46.297 "name": "BaseBdev1", 00:13:46.297 "uuid": "2da4ccc6-a00d-444d-a6b2-483aa2c782bc", 00:13:46.297 "is_configured": true, 00:13:46.297 "data_offset": 2048, 00:13:46.297 "data_size": 63488 00:13:46.297 }, 00:13:46.297 { 00:13:46.297 "name": "BaseBdev2", 00:13:46.297 "uuid": "5cf01a73-6193-40fe-81cf-d2e85ef61e42", 00:13:46.297 "is_configured": true, 00:13:46.297 "data_offset": 2048, 00:13:46.297 "data_size": 63488 00:13:46.297 }, 00:13:46.297 { 00:13:46.297 "name": "BaseBdev3", 00:13:46.297 "uuid": "39c702a6-509a-44f9-8018-689b214a6a81", 00:13:46.297 "is_configured": true, 00:13:46.297 "data_offset": 2048, 00:13:46.297 "data_size": 63488 00:13:46.297 } 00:13:46.297 ] 00:13:46.297 } 00:13:46.297 } 00:13:46.297 }' 00:13:46.297 13:27:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:46.297 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:46.297 BaseBdev2 00:13:46.297 BaseBdev3' 00:13:46.297 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.557 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:46.557 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.557 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.557 13:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:46.557 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.557 13:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.557 13:27:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.557 [2024-11-20 13:27:28.131780] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.557 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.558 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.558 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.558 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.558 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.558 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.558 13:27:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.558 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.558 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.558 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.558 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.558 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.558 "name": "Existed_Raid", 00:13:46.558 "uuid": "80f51f50-b76a-48fb-a8f3-00aef76cb533", 00:13:46.558 "strip_size_kb": 64, 00:13:46.558 "state": "online", 00:13:46.558 "raid_level": "raid5f", 00:13:46.558 "superblock": true, 00:13:46.558 "num_base_bdevs": 3, 00:13:46.558 "num_base_bdevs_discovered": 2, 00:13:46.558 "num_base_bdevs_operational": 2, 00:13:46.558 "base_bdevs_list": [ 00:13:46.558 { 00:13:46.558 "name": null, 00:13:46.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.558 "is_configured": false, 00:13:46.558 "data_offset": 0, 00:13:46.558 "data_size": 63488 00:13:46.558 }, 00:13:46.558 { 00:13:46.558 "name": "BaseBdev2", 00:13:46.558 "uuid": "5cf01a73-6193-40fe-81cf-d2e85ef61e42", 00:13:46.558 "is_configured": true, 00:13:46.558 "data_offset": 2048, 00:13:46.558 "data_size": 63488 00:13:46.558 }, 00:13:46.558 { 00:13:46.558 "name": "BaseBdev3", 00:13:46.558 "uuid": "39c702a6-509a-44f9-8018-689b214a6a81", 00:13:46.558 "is_configured": true, 00:13:46.558 "data_offset": 2048, 00:13:46.558 "data_size": 63488 00:13:46.558 } 00:13:46.558 ] 00:13:46.558 }' 00:13:46.558 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.558 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.125 13:27:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.125 [2024-11-20 13:27:28.647793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:47.125 [2024-11-20 13:27:28.648022] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:47.125 [2024-11-20 13:27:28.660046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.125 [2024-11-20 13:27:28.716065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:47.125 [2024-11-20 13:27:28.716138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.125 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.385 BaseBdev2 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.385 [ 00:13:47.385 { 00:13:47.385 "name": "BaseBdev2", 00:13:47.385 "aliases": [ 00:13:47.385 "6cc498e6-1b48-4ebe-8c92-b8fff647e8eb" 00:13:47.385 ], 00:13:47.385 "product_name": "Malloc disk", 00:13:47.385 "block_size": 512, 00:13:47.385 "num_blocks": 65536, 00:13:47.385 "uuid": "6cc498e6-1b48-4ebe-8c92-b8fff647e8eb", 00:13:47.385 "assigned_rate_limits": { 00:13:47.385 "rw_ios_per_sec": 0, 00:13:47.385 "rw_mbytes_per_sec": 0, 00:13:47.385 "r_mbytes_per_sec": 0, 00:13:47.385 "w_mbytes_per_sec": 0 00:13:47.385 }, 00:13:47.385 "claimed": false, 00:13:47.385 "zoned": false, 00:13:47.385 "supported_io_types": { 00:13:47.385 "read": true, 00:13:47.385 "write": true, 00:13:47.385 "unmap": true, 00:13:47.385 "flush": true, 00:13:47.385 "reset": true, 00:13:47.385 "nvme_admin": false, 00:13:47.385 "nvme_io": false, 00:13:47.385 "nvme_io_md": false, 00:13:47.385 "write_zeroes": true, 00:13:47.385 "zcopy": true, 00:13:47.385 "get_zone_info": false, 00:13:47.385 "zone_management": false, 00:13:47.385 "zone_append": false, 
00:13:47.385 "compare": false, 00:13:47.385 "compare_and_write": false, 00:13:47.385 "abort": true, 00:13:47.385 "seek_hole": false, 00:13:47.385 "seek_data": false, 00:13:47.385 "copy": true, 00:13:47.385 "nvme_iov_md": false 00:13:47.385 }, 00:13:47.385 "memory_domains": [ 00:13:47.385 { 00:13:47.385 "dma_device_id": "system", 00:13:47.385 "dma_device_type": 1 00:13:47.385 }, 00:13:47.385 { 00:13:47.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.385 "dma_device_type": 2 00:13:47.385 } 00:13:47.385 ], 00:13:47.385 "driver_specific": {} 00:13:47.385 } 00:13:47.385 ] 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.385 BaseBdev3 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:47.385 
13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.385 [ 00:13:47.385 { 00:13:47.385 "name": "BaseBdev3", 00:13:47.385 "aliases": [ 00:13:47.385 "0fe8419a-9efd-4803-b82e-7182821efe97" 00:13:47.385 ], 00:13:47.385 "product_name": "Malloc disk", 00:13:47.385 "block_size": 512, 00:13:47.385 "num_blocks": 65536, 00:13:47.385 "uuid": "0fe8419a-9efd-4803-b82e-7182821efe97", 00:13:47.385 "assigned_rate_limits": { 00:13:47.385 "rw_ios_per_sec": 0, 00:13:47.385 "rw_mbytes_per_sec": 0, 00:13:47.385 "r_mbytes_per_sec": 0, 00:13:47.385 "w_mbytes_per_sec": 0 00:13:47.385 }, 00:13:47.385 "claimed": false, 00:13:47.385 "zoned": false, 00:13:47.385 "supported_io_types": { 00:13:47.385 "read": true, 00:13:47.385 "write": true, 00:13:47.385 "unmap": true, 00:13:47.385 "flush": true, 00:13:47.385 "reset": true, 00:13:47.385 "nvme_admin": false, 00:13:47.385 "nvme_io": false, 00:13:47.385 "nvme_io_md": false, 00:13:47.385 "write_zeroes": true, 00:13:47.385 "zcopy": true, 00:13:47.385 "get_zone_info": 
false, 00:13:47.385 "zone_management": false, 00:13:47.385 "zone_append": false, 00:13:47.385 "compare": false, 00:13:47.385 "compare_and_write": false, 00:13:47.385 "abort": true, 00:13:47.385 "seek_hole": false, 00:13:47.385 "seek_data": false, 00:13:47.385 "copy": true, 00:13:47.385 "nvme_iov_md": false 00:13:47.385 }, 00:13:47.385 "memory_domains": [ 00:13:47.385 { 00:13:47.385 "dma_device_id": "system", 00:13:47.385 "dma_device_type": 1 00:13:47.385 }, 00:13:47.385 { 00:13:47.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.385 "dma_device_type": 2 00:13:47.385 } 00:13:47.385 ], 00:13:47.385 "driver_specific": {} 00:13:47.385 } 00:13:47.385 ] 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:47.385 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.386 [2024-11-20 13:27:28.883946] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:47.386 [2024-11-20 13:27:28.884138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:47.386 [2024-11-20 13:27:28.884213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.386 [2024-11-20 13:27:28.886608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.386 13:27:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.386 "name": "Existed_Raid", 00:13:47.386 "uuid": "5ce90eee-e4c6-4725-95f8-fc7d7a4f26cc", 00:13:47.386 "strip_size_kb": 64, 00:13:47.386 "state": "configuring", 00:13:47.386 "raid_level": "raid5f", 00:13:47.386 "superblock": true, 00:13:47.386 "num_base_bdevs": 3, 00:13:47.386 "num_base_bdevs_discovered": 2, 00:13:47.386 "num_base_bdevs_operational": 3, 00:13:47.386 "base_bdevs_list": [ 00:13:47.386 { 00:13:47.386 "name": "BaseBdev1", 00:13:47.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.386 "is_configured": false, 00:13:47.386 "data_offset": 0, 00:13:47.386 "data_size": 0 00:13:47.386 }, 00:13:47.386 { 00:13:47.386 "name": "BaseBdev2", 00:13:47.386 "uuid": "6cc498e6-1b48-4ebe-8c92-b8fff647e8eb", 00:13:47.386 "is_configured": true, 00:13:47.386 "data_offset": 2048, 00:13:47.386 "data_size": 63488 00:13:47.386 }, 00:13:47.386 { 00:13:47.386 "name": "BaseBdev3", 00:13:47.386 "uuid": "0fe8419a-9efd-4803-b82e-7182821efe97", 00:13:47.386 "is_configured": true, 00:13:47.386 "data_offset": 2048, 00:13:47.386 "data_size": 63488 00:13:47.386 } 00:13:47.386 ] 00:13:47.386 }' 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.386 13:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.953 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:47.953 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.953 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.953 [2024-11-20 13:27:29.363725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:47.953 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.953 
13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:47.953 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.953 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.954 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.954 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.954 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.954 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.954 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.954 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.954 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.954 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.954 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.954 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.954 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.954 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.954 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.954 "name": "Existed_Raid", 00:13:47.954 "uuid": 
"5ce90eee-e4c6-4725-95f8-fc7d7a4f26cc", 00:13:47.954 "strip_size_kb": 64, 00:13:47.954 "state": "configuring", 00:13:47.954 "raid_level": "raid5f", 00:13:47.954 "superblock": true, 00:13:47.954 "num_base_bdevs": 3, 00:13:47.954 "num_base_bdevs_discovered": 1, 00:13:47.954 "num_base_bdevs_operational": 3, 00:13:47.954 "base_bdevs_list": [ 00:13:47.954 { 00:13:47.954 "name": "BaseBdev1", 00:13:47.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.954 "is_configured": false, 00:13:47.954 "data_offset": 0, 00:13:47.954 "data_size": 0 00:13:47.954 }, 00:13:47.954 { 00:13:47.954 "name": null, 00:13:47.954 "uuid": "6cc498e6-1b48-4ebe-8c92-b8fff647e8eb", 00:13:47.954 "is_configured": false, 00:13:47.954 "data_offset": 0, 00:13:47.954 "data_size": 63488 00:13:47.954 }, 00:13:47.954 { 00:13:47.954 "name": "BaseBdev3", 00:13:47.954 "uuid": "0fe8419a-9efd-4803-b82e-7182821efe97", 00:13:47.954 "is_configured": true, 00:13:47.954 "data_offset": 2048, 00:13:47.954 "data_size": 63488 00:13:47.954 } 00:13:47.954 ] 00:13:47.954 }' 00:13:47.954 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.954 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.212 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.212 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:48.212 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.212 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.212 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:48.472 13:27:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.472 [2024-11-20 13:27:29.899332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.472 BaseBdev1 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:48.472 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.472 [ 00:13:48.472 { 00:13:48.472 "name": "BaseBdev1", 00:13:48.472 "aliases": [ 00:13:48.472 "31d23704-e57a-4985-bcdc-2bf19e5323b3" 00:13:48.472 ], 00:13:48.472 "product_name": "Malloc disk", 00:13:48.472 "block_size": 512, 00:13:48.472 "num_blocks": 65536, 00:13:48.472 "uuid": "31d23704-e57a-4985-bcdc-2bf19e5323b3", 00:13:48.472 "assigned_rate_limits": { 00:13:48.472 "rw_ios_per_sec": 0, 00:13:48.472 "rw_mbytes_per_sec": 0, 00:13:48.472 "r_mbytes_per_sec": 0, 00:13:48.472 "w_mbytes_per_sec": 0 00:13:48.472 }, 00:13:48.472 "claimed": true, 00:13:48.472 "claim_type": "exclusive_write", 00:13:48.472 "zoned": false, 00:13:48.472 "supported_io_types": { 00:13:48.472 "read": true, 00:13:48.472 "write": true, 00:13:48.472 "unmap": true, 00:13:48.472 "flush": true, 00:13:48.472 "reset": true, 00:13:48.472 "nvme_admin": false, 00:13:48.472 "nvme_io": false, 00:13:48.472 "nvme_io_md": false, 00:13:48.472 "write_zeroes": true, 00:13:48.473 "zcopy": true, 00:13:48.473 "get_zone_info": false, 00:13:48.473 "zone_management": false, 00:13:48.473 "zone_append": false, 00:13:48.473 "compare": false, 00:13:48.473 "compare_and_write": false, 00:13:48.473 "abort": true, 00:13:48.473 "seek_hole": false, 00:13:48.473 "seek_data": false, 00:13:48.473 "copy": true, 00:13:48.473 "nvme_iov_md": false 00:13:48.473 }, 00:13:48.473 "memory_domains": [ 00:13:48.473 { 00:13:48.473 "dma_device_id": "system", 00:13:48.473 "dma_device_type": 1 00:13:48.473 }, 00:13:48.473 { 00:13:48.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.473 "dma_device_type": 2 00:13:48.473 } 00:13:48.473 ], 00:13:48.473 "driver_specific": {} 00:13:48.473 } 00:13:48.473 ] 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.473 "name": "Existed_Raid", 00:13:48.473 "uuid": 
"5ce90eee-e4c6-4725-95f8-fc7d7a4f26cc", 00:13:48.473 "strip_size_kb": 64, 00:13:48.473 "state": "configuring", 00:13:48.473 "raid_level": "raid5f", 00:13:48.473 "superblock": true, 00:13:48.473 "num_base_bdevs": 3, 00:13:48.473 "num_base_bdevs_discovered": 2, 00:13:48.473 "num_base_bdevs_operational": 3, 00:13:48.473 "base_bdevs_list": [ 00:13:48.473 { 00:13:48.473 "name": "BaseBdev1", 00:13:48.473 "uuid": "31d23704-e57a-4985-bcdc-2bf19e5323b3", 00:13:48.473 "is_configured": true, 00:13:48.473 "data_offset": 2048, 00:13:48.473 "data_size": 63488 00:13:48.473 }, 00:13:48.473 { 00:13:48.473 "name": null, 00:13:48.473 "uuid": "6cc498e6-1b48-4ebe-8c92-b8fff647e8eb", 00:13:48.473 "is_configured": false, 00:13:48.473 "data_offset": 0, 00:13:48.473 "data_size": 63488 00:13:48.473 }, 00:13:48.473 { 00:13:48.473 "name": "BaseBdev3", 00:13:48.473 "uuid": "0fe8419a-9efd-4803-b82e-7182821efe97", 00:13:48.473 "is_configured": true, 00:13:48.473 "data_offset": 2048, 00:13:48.473 "data_size": 63488 00:13:48.473 } 00:13:48.473 ] 00:13:48.473 }' 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.473 13:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.732 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.732 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:48.732 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.732 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:48.990 13:27:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.990 [2024-11-20 13:27:30.442635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.990 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.990 "name": "Existed_Raid", 00:13:48.990 "uuid": "5ce90eee-e4c6-4725-95f8-fc7d7a4f26cc", 00:13:48.990 "strip_size_kb": 64, 00:13:48.990 "state": "configuring", 00:13:48.990 "raid_level": "raid5f", 00:13:48.990 "superblock": true, 00:13:48.990 "num_base_bdevs": 3, 00:13:48.990 "num_base_bdevs_discovered": 1, 00:13:48.990 "num_base_bdevs_operational": 3, 00:13:48.990 "base_bdevs_list": [ 00:13:48.990 { 00:13:48.990 "name": "BaseBdev1", 00:13:48.990 "uuid": "31d23704-e57a-4985-bcdc-2bf19e5323b3", 00:13:48.990 "is_configured": true, 00:13:48.990 "data_offset": 2048, 00:13:48.990 "data_size": 63488 00:13:48.990 }, 00:13:48.990 { 00:13:48.991 "name": null, 00:13:48.991 "uuid": "6cc498e6-1b48-4ebe-8c92-b8fff647e8eb", 00:13:48.991 "is_configured": false, 00:13:48.991 "data_offset": 0, 00:13:48.991 "data_size": 63488 00:13:48.991 }, 00:13:48.991 { 00:13:48.991 "name": null, 00:13:48.991 "uuid": "0fe8419a-9efd-4803-b82e-7182821efe97", 00:13:48.991 "is_configured": false, 00:13:48.991 "data_offset": 0, 00:13:48.991 "data_size": 63488 00:13:48.991 } 00:13:48.991 ] 00:13:48.991 }' 00:13:48.991 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.991 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.249 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:49.249 13:27:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.249 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.508 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.508 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.508 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:49.508 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:49.508 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.508 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.508 [2024-11-20 13:27:30.973783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:49.509 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.509 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:49.509 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.509 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.509 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.509 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.509 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.509 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:49.509 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.509 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.509 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.509 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.509 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.509 13:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.509 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.509 13:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.509 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.509 "name": "Existed_Raid", 00:13:49.509 "uuid": "5ce90eee-e4c6-4725-95f8-fc7d7a4f26cc", 00:13:49.509 "strip_size_kb": 64, 00:13:49.509 "state": "configuring", 00:13:49.509 "raid_level": "raid5f", 00:13:49.509 "superblock": true, 00:13:49.509 "num_base_bdevs": 3, 00:13:49.509 "num_base_bdevs_discovered": 2, 00:13:49.509 "num_base_bdevs_operational": 3, 00:13:49.509 "base_bdevs_list": [ 00:13:49.509 { 00:13:49.509 "name": "BaseBdev1", 00:13:49.509 "uuid": "31d23704-e57a-4985-bcdc-2bf19e5323b3", 00:13:49.509 "is_configured": true, 00:13:49.509 "data_offset": 2048, 00:13:49.509 "data_size": 63488 00:13:49.509 }, 00:13:49.509 { 00:13:49.509 "name": null, 00:13:49.509 "uuid": "6cc498e6-1b48-4ebe-8c92-b8fff647e8eb", 00:13:49.509 "is_configured": false, 00:13:49.509 "data_offset": 0, 00:13:49.509 "data_size": 63488 00:13:49.509 }, 00:13:49.509 { 00:13:49.509 "name": "BaseBdev3", 00:13:49.509 "uuid": "0fe8419a-9efd-4803-b82e-7182821efe97", 
00:13:49.509 "is_configured": true, 00:13:49.509 "data_offset": 2048, 00:13:49.509 "data_size": 63488 00:13:49.509 } 00:13:49.509 ] 00:13:49.509 }' 00:13:49.509 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.509 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.767 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.767 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:49.767 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.767 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.767 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.767 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:49.767 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:49.767 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.767 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.767 [2024-11-20 13:27:31.425149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.028 "name": "Existed_Raid", 00:13:50.028 "uuid": "5ce90eee-e4c6-4725-95f8-fc7d7a4f26cc", 00:13:50.028 "strip_size_kb": 64, 00:13:50.028 "state": "configuring", 00:13:50.028 "raid_level": "raid5f", 00:13:50.028 "superblock": true, 00:13:50.028 "num_base_bdevs": 3, 00:13:50.028 "num_base_bdevs_discovered": 1, 00:13:50.028 "num_base_bdevs_operational": 3, 00:13:50.028 "base_bdevs_list": [ 00:13:50.028 { 00:13:50.028 
"name": null, 00:13:50.028 "uuid": "31d23704-e57a-4985-bcdc-2bf19e5323b3", 00:13:50.028 "is_configured": false, 00:13:50.028 "data_offset": 0, 00:13:50.028 "data_size": 63488 00:13:50.028 }, 00:13:50.028 { 00:13:50.028 "name": null, 00:13:50.028 "uuid": "6cc498e6-1b48-4ebe-8c92-b8fff647e8eb", 00:13:50.028 "is_configured": false, 00:13:50.028 "data_offset": 0, 00:13:50.028 "data_size": 63488 00:13:50.028 }, 00:13:50.028 { 00:13:50.028 "name": "BaseBdev3", 00:13:50.028 "uuid": "0fe8419a-9efd-4803-b82e-7182821efe97", 00:13:50.028 "is_configured": true, 00:13:50.028 "data_offset": 2048, 00:13:50.028 "data_size": 63488 00:13:50.028 } 00:13:50.028 ] 00:13:50.028 }' 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.028 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.286 [2024-11-20 
13:27:31.943816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.286 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.546 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.546 13:27:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.546 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.546 "name": "Existed_Raid", 00:13:50.546 "uuid": "5ce90eee-e4c6-4725-95f8-fc7d7a4f26cc", 00:13:50.546 "strip_size_kb": 64, 00:13:50.546 "state": "configuring", 00:13:50.546 "raid_level": "raid5f", 00:13:50.546 "superblock": true, 00:13:50.546 "num_base_bdevs": 3, 00:13:50.546 "num_base_bdevs_discovered": 2, 00:13:50.546 "num_base_bdevs_operational": 3, 00:13:50.546 "base_bdevs_list": [ 00:13:50.546 { 00:13:50.546 "name": null, 00:13:50.546 "uuid": "31d23704-e57a-4985-bcdc-2bf19e5323b3", 00:13:50.546 "is_configured": false, 00:13:50.546 "data_offset": 0, 00:13:50.546 "data_size": 63488 00:13:50.546 }, 00:13:50.546 { 00:13:50.546 "name": "BaseBdev2", 00:13:50.546 "uuid": "6cc498e6-1b48-4ebe-8c92-b8fff647e8eb", 00:13:50.546 "is_configured": true, 00:13:50.546 "data_offset": 2048, 00:13:50.546 "data_size": 63488 00:13:50.546 }, 00:13:50.546 { 00:13:50.546 "name": "BaseBdev3", 00:13:50.546 "uuid": "0fe8419a-9efd-4803-b82e-7182821efe97", 00:13:50.546 "is_configured": true, 00:13:50.546 "data_offset": 2048, 00:13:50.546 "data_size": 63488 00:13:50.546 } 00:13:50.546 ] 00:13:50.546 }' 00:13:50.546 13:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.546 13:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.805 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:50.805 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.805 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.805 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.805 13:27:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.805 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:50.805 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:50.805 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.805 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.805 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.805 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 31d23704-e57a-4985-bcdc-2bf19e5323b3 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.064 [2024-11-20 13:27:32.503348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:51.064 [2024-11-20 13:27:32.503604] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:51.064 [2024-11-20 13:27:32.503631] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:51.064 NewBaseBdev 00:13:51.064 [2024-11-20 13:27:32.503981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.064 [2024-11-20 13:27:32.504504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:51.064 [2024-11-20 
13:27:32.504520] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:51.064 [2024-11-20 13:27:32.504661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.064 [ 00:13:51.064 { 00:13:51.064 "name": "NewBaseBdev", 00:13:51.064 "aliases": [ 00:13:51.064 "31d23704-e57a-4985-bcdc-2bf19e5323b3" 00:13:51.064 ], 00:13:51.064 "product_name": "Malloc disk", 00:13:51.064 
"block_size": 512, 00:13:51.064 "num_blocks": 65536, 00:13:51.064 "uuid": "31d23704-e57a-4985-bcdc-2bf19e5323b3", 00:13:51.064 "assigned_rate_limits": { 00:13:51.064 "rw_ios_per_sec": 0, 00:13:51.064 "rw_mbytes_per_sec": 0, 00:13:51.064 "r_mbytes_per_sec": 0, 00:13:51.064 "w_mbytes_per_sec": 0 00:13:51.064 }, 00:13:51.064 "claimed": true, 00:13:51.064 "claim_type": "exclusive_write", 00:13:51.064 "zoned": false, 00:13:51.064 "supported_io_types": { 00:13:51.064 "read": true, 00:13:51.064 "write": true, 00:13:51.064 "unmap": true, 00:13:51.064 "flush": true, 00:13:51.064 "reset": true, 00:13:51.064 "nvme_admin": false, 00:13:51.064 "nvme_io": false, 00:13:51.064 "nvme_io_md": false, 00:13:51.064 "write_zeroes": true, 00:13:51.064 "zcopy": true, 00:13:51.064 "get_zone_info": false, 00:13:51.064 "zone_management": false, 00:13:51.064 "zone_append": false, 00:13:51.064 "compare": false, 00:13:51.064 "compare_and_write": false, 00:13:51.064 "abort": true, 00:13:51.064 "seek_hole": false, 00:13:51.064 "seek_data": false, 00:13:51.064 "copy": true, 00:13:51.064 "nvme_iov_md": false 00:13:51.064 }, 00:13:51.064 "memory_domains": [ 00:13:51.064 { 00:13:51.064 "dma_device_id": "system", 00:13:51.064 "dma_device_type": 1 00:13:51.064 }, 00:13:51.064 { 00:13:51.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.064 "dma_device_type": 2 00:13:51.064 } 00:13:51.064 ], 00:13:51.064 "driver_specific": {} 00:13:51.064 } 00:13:51.064 ] 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.064 13:27:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.064 "name": "Existed_Raid", 00:13:51.064 "uuid": "5ce90eee-e4c6-4725-95f8-fc7d7a4f26cc", 00:13:51.064 "strip_size_kb": 64, 00:13:51.064 "state": "online", 00:13:51.064 "raid_level": "raid5f", 00:13:51.064 "superblock": true, 00:13:51.064 "num_base_bdevs": 3, 00:13:51.064 "num_base_bdevs_discovered": 3, 00:13:51.064 "num_base_bdevs_operational": 3, 00:13:51.064 
"base_bdevs_list": [ 00:13:51.064 { 00:13:51.064 "name": "NewBaseBdev", 00:13:51.064 "uuid": "31d23704-e57a-4985-bcdc-2bf19e5323b3", 00:13:51.064 "is_configured": true, 00:13:51.064 "data_offset": 2048, 00:13:51.064 "data_size": 63488 00:13:51.064 }, 00:13:51.064 { 00:13:51.064 "name": "BaseBdev2", 00:13:51.064 "uuid": "6cc498e6-1b48-4ebe-8c92-b8fff647e8eb", 00:13:51.064 "is_configured": true, 00:13:51.064 "data_offset": 2048, 00:13:51.064 "data_size": 63488 00:13:51.064 }, 00:13:51.064 { 00:13:51.064 "name": "BaseBdev3", 00:13:51.064 "uuid": "0fe8419a-9efd-4803-b82e-7182821efe97", 00:13:51.064 "is_configured": true, 00:13:51.064 "data_offset": 2048, 00:13:51.064 "data_size": 63488 00:13:51.064 } 00:13:51.064 ] 00:13:51.064 }' 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.064 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.324 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:51.324 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:51.324 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:51.324 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:51.324 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:51.324 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:51.324 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:51.324 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.324 13:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:13:51.324 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.324 [2024-11-20 13:27:32.971108] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.324 13:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:51.583 "name": "Existed_Raid", 00:13:51.583 "aliases": [ 00:13:51.583 "5ce90eee-e4c6-4725-95f8-fc7d7a4f26cc" 00:13:51.583 ], 00:13:51.583 "product_name": "Raid Volume", 00:13:51.583 "block_size": 512, 00:13:51.583 "num_blocks": 126976, 00:13:51.583 "uuid": "5ce90eee-e4c6-4725-95f8-fc7d7a4f26cc", 00:13:51.583 "assigned_rate_limits": { 00:13:51.583 "rw_ios_per_sec": 0, 00:13:51.583 "rw_mbytes_per_sec": 0, 00:13:51.583 "r_mbytes_per_sec": 0, 00:13:51.583 "w_mbytes_per_sec": 0 00:13:51.583 }, 00:13:51.583 "claimed": false, 00:13:51.583 "zoned": false, 00:13:51.583 "supported_io_types": { 00:13:51.583 "read": true, 00:13:51.583 "write": true, 00:13:51.583 "unmap": false, 00:13:51.583 "flush": false, 00:13:51.583 "reset": true, 00:13:51.583 "nvme_admin": false, 00:13:51.583 "nvme_io": false, 00:13:51.583 "nvme_io_md": false, 00:13:51.583 "write_zeroes": true, 00:13:51.583 "zcopy": false, 00:13:51.583 "get_zone_info": false, 00:13:51.583 "zone_management": false, 00:13:51.583 "zone_append": false, 00:13:51.583 "compare": false, 00:13:51.583 "compare_and_write": false, 00:13:51.583 "abort": false, 00:13:51.583 "seek_hole": false, 00:13:51.583 "seek_data": false, 00:13:51.583 "copy": false, 00:13:51.583 "nvme_iov_md": false 00:13:51.583 }, 00:13:51.583 "driver_specific": { 00:13:51.583 "raid": { 00:13:51.583 "uuid": "5ce90eee-e4c6-4725-95f8-fc7d7a4f26cc", 00:13:51.583 "strip_size_kb": 64, 00:13:51.583 "state": "online", 00:13:51.583 "raid_level": "raid5f", 00:13:51.583 "superblock": true, 00:13:51.583 
"num_base_bdevs": 3, 00:13:51.583 "num_base_bdevs_discovered": 3, 00:13:51.583 "num_base_bdevs_operational": 3, 00:13:51.583 "base_bdevs_list": [ 00:13:51.583 { 00:13:51.583 "name": "NewBaseBdev", 00:13:51.583 "uuid": "31d23704-e57a-4985-bcdc-2bf19e5323b3", 00:13:51.583 "is_configured": true, 00:13:51.583 "data_offset": 2048, 00:13:51.583 "data_size": 63488 00:13:51.583 }, 00:13:51.583 { 00:13:51.583 "name": "BaseBdev2", 00:13:51.583 "uuid": "6cc498e6-1b48-4ebe-8c92-b8fff647e8eb", 00:13:51.583 "is_configured": true, 00:13:51.583 "data_offset": 2048, 00:13:51.583 "data_size": 63488 00:13:51.583 }, 00:13:51.583 { 00:13:51.583 "name": "BaseBdev3", 00:13:51.583 "uuid": "0fe8419a-9efd-4803-b82e-7182821efe97", 00:13:51.583 "is_configured": true, 00:13:51.583 "data_offset": 2048, 00:13:51.583 "data_size": 63488 00:13:51.583 } 00:13:51.583 ] 00:13:51.583 } 00:13:51.583 } 00:13:51.583 }' 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:51.583 BaseBdev2 00:13:51.583 BaseBdev3' 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.583 
13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.583 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.584 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:51.584 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.584 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.584 [2024-11-20 13:27:33.206450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:51.584 [2024-11-20 13:27:33.206564] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:51.584 [2024-11-20 13:27:33.206680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.584 [2024-11-20 13:27:33.207009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.584 [2024-11-20 13:27:33.207027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:13:51.584 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.584 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 90800 00:13:51.584 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 90800 ']' 00:13:51.584 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 90800 00:13:51.584 13:27:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:51.584 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.584 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90800 00:13:51.842 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.842 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.842 killing process with pid 90800 00:13:51.842 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90800' 00:13:51.842 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 90800 00:13:51.842 [2024-11-20 13:27:33.257062] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.842 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 90800 00:13:51.842 [2024-11-20 13:27:33.290507] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.842 13:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:51.842 00:13:51.842 real 0m9.112s 00:13:51.842 user 0m15.709s 00:13:51.842 sys 0m1.646s 00:13:51.842 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.842 13:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.842 ************************************ 00:13:51.842 END TEST raid5f_state_function_test_sb 00:13:51.842 ************************************ 00:13:52.101 13:27:33 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:52.101 13:27:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:52.101 
13:27:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.101 13:27:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:52.101 ************************************ 00:13:52.101 START TEST raid5f_superblock_test 00:13:52.101 ************************************ 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91404 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91404 00:13:52.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 91404 ']' 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.101 13:27:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.101 [2024-11-20 13:27:33.666097] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:13:52.101 [2024-11-20 13:27:33.666352] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91404 ] 00:13:52.360 [2024-11-20 13:27:33.837241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.360 [2024-11-20 13:27:33.868929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.360 [2024-11-20 13:27:33.915465] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.360 [2024-11-20 13:27:33.915515] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.298 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.298 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.299 malloc1 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.299 [2024-11-20 13:27:34.628872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:53.299 [2024-11-20 13:27:34.628964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.299 [2024-11-20 13:27:34.629020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:53.299 [2024-11-20 13:27:34.629040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.299 [2024-11-20 13:27:34.631688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.299 [2024-11-20 13:27:34.631741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:53.299 pt1 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
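The xtrace above loops `i = 1..num_base_bdevs`, deriving for each base bdev a malloc name, a passthru name, and a fixed-pattern UUID (`malloc1`/`pt1`/`...-000000000001`, then `malloc2`/`pt2`, and so on). A minimal Python sketch of that naming scheme — the helper name is ours, not part of the test suite:

```python
# Reproduce the per-base-bdev naming used by the setup loop in bdev_raid.sh:
# malloc<i> is the backing bdev, pt<i> the passthru wrapper, and the UUID
# carries the zero-padded index in its last group.
def base_bdev_names(num_base_bdevs):
    triples = []
    for i in range(1, num_base_bdevs + 1):
        triples.append((f"malloc{i}", f"pt{i}", f"00000000-0000-0000-0000-{i:012d}"))
    return triples

print(base_bdev_names(3)[-1])
# → ('malloc3', 'pt3', '00000000-0000-0000-0000-000000000003')
```

The names this produces for `num_base_bdevs=3` match the `bdev_malloc_create` and `bdev_passthru_create` calls echoed in the log.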
00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.299 malloc2 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.299 [2024-11-20 13:27:34.658787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:53.299 [2024-11-20 13:27:34.658873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.299 [2024-11-20 13:27:34.658897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:53.299 [2024-11-20 13:27:34.658910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.299 [2024-11-20 13:27:34.661596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.299 [2024-11-20 13:27:34.661708] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:53.299 pt2 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.299 malloc3 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.299 [2024-11-20 13:27:34.692506] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:53.299 [2024-11-20 13:27:34.692670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.299 [2024-11-20 13:27:34.692733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:53.299 [2024-11-20 13:27:34.692773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.299 [2024-11-20 13:27:34.695457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.299 [2024-11-20 13:27:34.695581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:53.299 pt3 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.299 [2024-11-20 13:27:34.700569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:53.299 [2024-11-20 13:27:34.702887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:53.299 [2024-11-20 13:27:34.703035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:53.299 [2024-11-20 13:27:34.703283] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:53.299 [2024-11-20 13:27:34.703335] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
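The logged geometry can be cross-checked by hand: each base bdev is created with `bdev_malloc_create 32 512` (32 MiB of 512-byte blocks), the superblock reserves the 2048-block `data_offset` reported in the dump below, and raid5f over three members exposes two members' worth of data capacity. A quick sanity check of the `blockcnt 126976` figure — the arithmetic only, not SPDK's actual sizing code:

```python
block_size = 512
malloc_blocks = 32 * 1024 * 1024 // block_size   # bdev_malloc_create 32 512 -> 65536 blocks
data_offset = 2048                               # per-base-bdev superblock reservation (from the dump)
num_base_bdevs = 3

data_blocks_per_bdev = malloc_blocks - data_offset            # 63488, the "data_size" in the dump
raid5f_capacity = (num_base_bdevs - 1) * data_blocks_per_bdev # one member's worth goes to parity
print(raid5f_capacity)  # → 126976, matching "blockcnt 126976, blocklen 512"
```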
00:13:53.299 [2024-11-20 13:27:34.703706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:13:53.299 [2024-11-20 13:27:34.704273] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:53.299 [2024-11-20 13:27:34.704332] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:53.299 [2024-11-20 13:27:34.704613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.299 
13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.299 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.299 "name": "raid_bdev1", 00:13:53.299 "uuid": "7ed88af9-78dc-40fc-947a-d046517202af", 00:13:53.299 "strip_size_kb": 64, 00:13:53.299 "state": "online", 00:13:53.299 "raid_level": "raid5f", 00:13:53.299 "superblock": true, 00:13:53.299 "num_base_bdevs": 3, 00:13:53.299 "num_base_bdevs_discovered": 3, 00:13:53.299 "num_base_bdevs_operational": 3, 00:13:53.299 "base_bdevs_list": [ 00:13:53.299 { 00:13:53.299 "name": "pt1", 00:13:53.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:53.299 "is_configured": true, 00:13:53.299 "data_offset": 2048, 00:13:53.299 "data_size": 63488 00:13:53.299 }, 00:13:53.299 { 00:13:53.299 "name": "pt2", 00:13:53.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:53.299 "is_configured": true, 00:13:53.299 "data_offset": 2048, 00:13:53.299 "data_size": 63488 00:13:53.300 }, 00:13:53.300 { 00:13:53.300 "name": "pt3", 00:13:53.300 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:53.300 "is_configured": true, 00:13:53.300 "data_offset": 2048, 00:13:53.300 "data_size": 63488 00:13:53.300 } 00:13:53.300 ] 00:13:53.300 }' 00:13:53.300 13:27:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.300 13:27:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.564 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:53.564 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:53.564 13:27:35 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:53.564 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:53.564 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:53.564 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:53.564 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:53.564 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:53.564 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.564 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.564 [2024-11-20 13:27:35.160189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.564 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.564 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:53.564 "name": "raid_bdev1", 00:13:53.564 "aliases": [ 00:13:53.564 "7ed88af9-78dc-40fc-947a-d046517202af" 00:13:53.564 ], 00:13:53.564 "product_name": "Raid Volume", 00:13:53.564 "block_size": 512, 00:13:53.564 "num_blocks": 126976, 00:13:53.564 "uuid": "7ed88af9-78dc-40fc-947a-d046517202af", 00:13:53.564 "assigned_rate_limits": { 00:13:53.564 "rw_ios_per_sec": 0, 00:13:53.564 "rw_mbytes_per_sec": 0, 00:13:53.564 "r_mbytes_per_sec": 0, 00:13:53.564 "w_mbytes_per_sec": 0 00:13:53.564 }, 00:13:53.564 "claimed": false, 00:13:53.564 "zoned": false, 00:13:53.564 "supported_io_types": { 00:13:53.564 "read": true, 00:13:53.564 "write": true, 00:13:53.564 "unmap": false, 00:13:53.564 "flush": false, 00:13:53.564 "reset": true, 00:13:53.564 "nvme_admin": false, 00:13:53.564 "nvme_io": false, 00:13:53.564 "nvme_io_md": false, 
00:13:53.564 "write_zeroes": true, 00:13:53.564 "zcopy": false, 00:13:53.564 "get_zone_info": false, 00:13:53.564 "zone_management": false, 00:13:53.564 "zone_append": false, 00:13:53.564 "compare": false, 00:13:53.564 "compare_and_write": false, 00:13:53.564 "abort": false, 00:13:53.564 "seek_hole": false, 00:13:53.564 "seek_data": false, 00:13:53.564 "copy": false, 00:13:53.564 "nvme_iov_md": false 00:13:53.564 }, 00:13:53.564 "driver_specific": { 00:13:53.564 "raid": { 00:13:53.564 "uuid": "7ed88af9-78dc-40fc-947a-d046517202af", 00:13:53.564 "strip_size_kb": 64, 00:13:53.564 "state": "online", 00:13:53.564 "raid_level": "raid5f", 00:13:53.564 "superblock": true, 00:13:53.564 "num_base_bdevs": 3, 00:13:53.564 "num_base_bdevs_discovered": 3, 00:13:53.564 "num_base_bdevs_operational": 3, 00:13:53.564 "base_bdevs_list": [ 00:13:53.564 { 00:13:53.564 "name": "pt1", 00:13:53.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:53.564 "is_configured": true, 00:13:53.564 "data_offset": 2048, 00:13:53.564 "data_size": 63488 00:13:53.564 }, 00:13:53.564 { 00:13:53.564 "name": "pt2", 00:13:53.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:53.564 "is_configured": true, 00:13:53.564 "data_offset": 2048, 00:13:53.564 "data_size": 63488 00:13:53.564 }, 00:13:53.564 { 00:13:53.564 "name": "pt3", 00:13:53.564 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:53.564 "is_configured": true, 00:13:53.564 "data_offset": 2048, 00:13:53.564 "data_size": 63488 00:13:53.564 } 00:13:53.564 ] 00:13:53.564 } 00:13:53.564 } 00:13:53.564 }' 00:13:53.564 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:53.853 pt2 00:13:53.853 pt3' 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.853 
13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.853 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.854 [2024-11-20 13:27:35.407884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7ed88af9-78dc-40fc-947a-d046517202af 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7ed88af9-78dc-40fc-947a-d046517202af ']' 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:53.854 13:27:35 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.854 [2024-11-20 13:27:35.455598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.854 [2024-11-20 13:27:35.455635] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.854 [2024-11-20 13:27:35.455734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.854 [2024-11-20 13:27:35.455817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.854 [2024-11-20 13:27:35.455840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.854 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.113 [2024-11-20 13:27:35.611397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:54.113 [2024-11-20 13:27:35.613799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:54.113 [2024-11-20 13:27:35.613862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:54.113 [2024-11-20 13:27:35.613929] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:54.113 [2024-11-20 13:27:35.614025] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:54.113 [2024-11-20 13:27:35.614055] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:54.113 [2024-11-20 13:27:35.614071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:54.113 [2024-11-20 13:27:35.614087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:13:54.113 request: 00:13:54.113 { 00:13:54.113 "name": "raid_bdev1", 00:13:54.113 "raid_level": "raid5f", 00:13:54.113 "base_bdevs": [ 00:13:54.113 "malloc1", 00:13:54.113 "malloc2", 00:13:54.113 "malloc3" 00:13:54.113 ], 00:13:54.113 "strip_size_kb": 64, 00:13:54.113 "superblock": false, 00:13:54.113 "method": "bdev_raid_create", 00:13:54.113 "req_id": 1 00:13:54.113 } 00:13:54.113 Got JSON-RPC error response 00:13:54.113 response: 00:13:54.113 { 00:13:54.113 "code": -17, 00:13:54.113 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:54.113 } 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
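The negative-path check above (`NOT rpc_cmd bdev_raid_create ...`) expects creation to fail because the malloc bdevs still carry the superblock of the deleted array. A small sketch of validating the JSON-RPC error payload printed in the log; the payload literal is copied from the output above:

```python
import json

# Error response captured when re-creating raid_bdev1 over base bdevs
# that already hold a foreign superblock.
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# -17 is the errno-style EEXIST value surfaced through the RPC layer.
assert response["code"] == -17
assert "File exists" in response["message"]
print("error response matches EEXIST")
```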
00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.113 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.114 [2024-11-20 13:27:35.679207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:54.114 [2024-11-20 13:27:35.679343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.114 [2024-11-20 13:27:35.679393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:54.114 [2024-11-20 13:27:35.679430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.114 [2024-11-20 13:27:35.682049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.114 [2024-11-20 13:27:35.682144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:54.114 [2024-11-20 13:27:35.682276] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:54.114 [2024-11-20 13:27:35.682356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:54.114 pt1 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.114 "name": "raid_bdev1", 00:13:54.114 "uuid": "7ed88af9-78dc-40fc-947a-d046517202af", 00:13:54.114 "strip_size_kb": 64, 00:13:54.114 "state": "configuring", 00:13:54.114 "raid_level": "raid5f", 00:13:54.114 "superblock": true, 00:13:54.114 "num_base_bdevs": 3, 00:13:54.114 "num_base_bdevs_discovered": 1, 00:13:54.114 
"num_base_bdevs_operational": 3, 00:13:54.114 "base_bdevs_list": [ 00:13:54.114 { 00:13:54.114 "name": "pt1", 00:13:54.114 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:54.114 "is_configured": true, 00:13:54.114 "data_offset": 2048, 00:13:54.114 "data_size": 63488 00:13:54.114 }, 00:13:54.114 { 00:13:54.114 "name": null, 00:13:54.114 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:54.114 "is_configured": false, 00:13:54.114 "data_offset": 2048, 00:13:54.114 "data_size": 63488 00:13:54.114 }, 00:13:54.114 { 00:13:54.114 "name": null, 00:13:54.114 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:54.114 "is_configured": false, 00:13:54.114 "data_offset": 2048, 00:13:54.114 "data_size": 63488 00:13:54.114 } 00:13:54.114 ] 00:13:54.114 }' 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.114 13:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.680 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:54.680 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:54.680 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.681 [2024-11-20 13:27:36.134445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:54.681 [2024-11-20 13:27:36.134542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.681 [2024-11-20 13:27:36.134569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:54.681 [2024-11-20 13:27:36.134585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.681 [2024-11-20 13:27:36.135085] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:54.681 [2024-11-20 13:27:36.135108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:54.681 [2024-11-20 13:27:36.135195] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:54.681 [2024-11-20 13:27:36.135223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:54.681 pt2
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.681 [2024-11-20 13:27:36.146469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local
num_base_bdevs
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:54.681 "name": "raid_bdev1",
00:13:54.681 "uuid": "7ed88af9-78dc-40fc-947a-d046517202af",
00:13:54.681 "strip_size_kb": 64,
00:13:54.681 "state": "configuring",
00:13:54.681 "raid_level": "raid5f",
00:13:54.681 "superblock": true,
00:13:54.681 "num_base_bdevs": 3,
00:13:54.681 "num_base_bdevs_discovered": 1,
00:13:54.681 "num_base_bdevs_operational": 3,
00:13:54.681 "base_bdevs_list": [
00:13:54.681 {
00:13:54.681 "name": "pt1",
00:13:54.681 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:54.681 "is_configured": true,
00:13:54.681 "data_offset": 2048,
00:13:54.681 "data_size": 63488
00:13:54.681 },
00:13:54.681 {
00:13:54.681 "name": null,
00:13:54.681 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:54.681 "is_configured": false,
00:13:54.681 "data_offset": 0,
00:13:54.681 "data_size": 63488
00:13:54.681 },
00:13:54.681 {
00:13:54.681 "name": null,
00:13:54.681 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:54.681 "is_configured": false,
00:13:54.681 "data_offset": 2048,
00:13:54.681 "data_size": 63488
00:13:54.681 }
00:13:54.681 ]
00:13:54.681 }'
00:13:54.681 13:27:36
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:54.681 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.248 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:13:55.248 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:55.248 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:55.248 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.248 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.248 [2024-11-20 13:27:36.645575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:55.248 [2024-11-20 13:27:36.645745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:55.248 [2024-11-20 13:27:36.645775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:13:55.248 [2024-11-20 13:27:36.645785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:55.248 [2024-11-20 13:27:36.646283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:55.248 [2024-11-20 13:27:36.646305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:55.248 [2024-11-20 13:27:36.646393] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:55.248 [2024-11-20 13:27:36.646419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:55.248 pt2
00:13:55.248 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.248 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:55.248 13:27:36
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:55.248 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:13:55.248 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.248 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.248 [2024-11-20 13:27:36.657587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:13:55.248 [2024-11-20 13:27:36.657671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:55.248 [2024-11-20 13:27:36.657700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:13:55.248 [2024-11-20 13:27:36.657711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:55.248 [2024-11-20 13:27:36.658205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:55.248 [2024-11-20 13:27:36.658231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:13:55.248 [2024-11-20 13:27:36.658319] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:13:55.248 [2024-11-20 13:27:36.658345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:13:55.248 [2024-11-20 13:27:36.658466] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:13:55.248 [2024-11-20 13:27:36.658491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:13:55.248 [2024-11-20 13:27:36.658759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:13:55.248 [2024-11-20 13:27:36.659240] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:13:55.248 [2024-11-20 13:27:36.659258]
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900
00:13:55.248 [2024-11-20 13:27:36.659381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:55.248 pt3
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test --
common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:55.249 "name": "raid_bdev1",
00:13:55.249 "uuid": "7ed88af9-78dc-40fc-947a-d046517202af",
00:13:55.249 "strip_size_kb": 64,
00:13:55.249 "state": "online",
00:13:55.249 "raid_level": "raid5f",
00:13:55.249 "superblock": true,
00:13:55.249 "num_base_bdevs": 3,
00:13:55.249 "num_base_bdevs_discovered": 3,
00:13:55.249 "num_base_bdevs_operational": 3,
00:13:55.249 "base_bdevs_list": [
00:13:55.249 {
00:13:55.249 "name": "pt1",
00:13:55.249 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:55.249 "is_configured": true,
00:13:55.249 "data_offset": 2048,
00:13:55.249 "data_size": 63488
00:13:55.249 },
00:13:55.249 {
00:13:55.249 "name": "pt2",
00:13:55.249 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:55.249 "is_configured": true,
00:13:55.249 "data_offset": 2048,
00:13:55.249 "data_size": 63488
00:13:55.249 },
00:13:55.249 {
00:13:55.249 "name": "pt3",
00:13:55.249 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:55.249 "is_configured": true,
00:13:55.249 "data_offset": 2048,
00:13:55.249 "data_size": 63488
00:13:55.249 }
00:13:55.249 ]
00:13:55.249 }'
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:55.249 13:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.508 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:13:55.508 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:13:55.508 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:55.508 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:55.508 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:55.508 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:55.508 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:55.508 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:55.508 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.767 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.767 [2024-11-20 13:27:37.181089] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:55.767 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.767 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:55.767 "name": "raid_bdev1",
00:13:55.767 "aliases": [
00:13:55.767 "7ed88af9-78dc-40fc-947a-d046517202af"
00:13:55.767 ],
00:13:55.767 "product_name": "Raid Volume",
00:13:55.767 "block_size": 512,
00:13:55.767 "num_blocks": 126976,
00:13:55.767 "uuid": "7ed88af9-78dc-40fc-947a-d046517202af",
00:13:55.767 "assigned_rate_limits": {
00:13:55.767 "rw_ios_per_sec": 0,
00:13:55.767 "rw_mbytes_per_sec": 0,
00:13:55.767 "r_mbytes_per_sec": 0,
00:13:55.767 "w_mbytes_per_sec": 0
00:13:55.767 },
00:13:55.767 "claimed": false,
00:13:55.767 "zoned": false,
00:13:55.767 "supported_io_types": {
00:13:55.767 "read": true,
00:13:55.768 "write": true,
00:13:55.768 "unmap": false,
00:13:55.768 "flush": false,
00:13:55.768 "reset": true,
00:13:55.768 "nvme_admin": false,
00:13:55.768 "nvme_io": false,
00:13:55.768 "nvme_io_md": false,
00:13:55.768 "write_zeroes": true,
00:13:55.768 "zcopy": false,
00:13:55.768
"get_zone_info": false,
00:13:55.768 "zone_management": false,
00:13:55.768 "zone_append": false,
00:13:55.768 "compare": false,
00:13:55.768 "compare_and_write": false,
00:13:55.768 "abort": false,
00:13:55.768 "seek_hole": false,
00:13:55.768 "seek_data": false,
00:13:55.768 "copy": false,
00:13:55.768 "nvme_iov_md": false
00:13:55.768 },
00:13:55.768 "driver_specific": {
00:13:55.768 "raid": {
00:13:55.768 "uuid": "7ed88af9-78dc-40fc-947a-d046517202af",
00:13:55.768 "strip_size_kb": 64,
00:13:55.768 "state": "online",
00:13:55.768 "raid_level": "raid5f",
00:13:55.768 "superblock": true,
00:13:55.768 "num_base_bdevs": 3,
00:13:55.768 "num_base_bdevs_discovered": 3,
00:13:55.768 "num_base_bdevs_operational": 3,
00:13:55.768 "base_bdevs_list": [
00:13:55.768 {
00:13:55.768 "name": "pt1",
00:13:55.768 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:55.768 "is_configured": true,
00:13:55.768 "data_offset": 2048,
00:13:55.768 "data_size": 63488
00:13:55.768 },
00:13:55.768 {
00:13:55.768 "name": "pt2",
00:13:55.768 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:55.768 "is_configured": true,
00:13:55.768 "data_offset": 2048,
00:13:55.768 "data_size": 63488
00:13:55.768 },
00:13:55.768 {
00:13:55.768 "name": "pt3",
00:13:55.768 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:55.768 "is_configured": true,
00:13:55.768 "data_offset": 2048,
00:13:55.768 "data_size": 63488
00:13:55.768 }
00:13:55.768 ]
00:13:55.768 }
00:13:55.768 }
00:13:55.768 }'
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:13:55.768 pt2
00:13:55.768 pt3'
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.768 13:27:37
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test --
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:13:55.768 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:13:56.027 [2024-11-20 13:27:37.480526] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7ed88af9-78dc-40fc-947a-d046517202af '!=' 7ed88af9-78dc-40fc-947a-d046517202af ']'
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.027 [2024-11-20 13:27:37.528308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.027
13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:56.027 "name": "raid_bdev1",
00:13:56.027 "uuid": "7ed88af9-78dc-40fc-947a-d046517202af",
00:13:56.027 "strip_size_kb": 64,
00:13:56.027 "state": "online",
00:13:56.027 "raid_level": "raid5f",
00:13:56.027 "superblock": true,
00:13:56.027 "num_base_bdevs": 3,
00:13:56.027 "num_base_bdevs_discovered": 2,
00:13:56.027 "num_base_bdevs_operational": 2,
00:13:56.027 "base_bdevs_list": [
00:13:56.027 {
00:13:56.027 "name": null,
00:13:56.027 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:56.027 "is_configured": false,
00:13:56.027 "data_offset": 0,
00:13:56.027 "data_size": 63488
00:13:56.027 },
00:13:56.027 {
00:13:56.027 "name": "pt2",
00:13:56.027 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:56.027 "is_configured": true,
00:13:56.027 "data_offset": 2048,
00:13:56.027 "data_size": 63488
00:13:56.027 },
00:13:56.027 {
00:13:56.027 "name": "pt3",
00:13:56.027 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:56.027 "is_configured": true,
00:13:56.027 "data_offset": 2048,
00:13:56.027 "data_size": 63488
00:13:56.027 }
00:13:56.027 ]
00:13:56.027 }'
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:56.027 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.595 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:56.595 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.596 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.596 [2024-11-20 13:27:37.983480]
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:56.596 [2024-11-20 13:27:37.983629] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:56.596 [2024-11-20 13:27:37.983754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:56.596 [2024-11-20 13:27:37.983857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:56.596 [2024-11-20 13:27:37.983910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline
00:13:56.596 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.596 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:56.596 13:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:13:56.596 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.596 13:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.596 [2024-11-20 13:27:38.071315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:56.596 [2024-11-20 13:27:38.071476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:56.596 [2024-11-20 13:27:38.071528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:13:56.596 [2024-11-20 13:27:38.071566] vbdev_passthru.c: 696:vbdev_passthru_register:
*NOTICE*: bdev claimed
00:13:56.596 [2024-11-20 13:27:38.074183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:56.596 [2024-11-20 13:27:38.074272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:56.596 [2024-11-20 13:27:38.074399] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:56.596 [2024-11-20 13:27:38.074492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:56.596 pt2
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name ==
"raid_bdev1")'
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:56.596 "name": "raid_bdev1",
00:13:56.596 "uuid": "7ed88af9-78dc-40fc-947a-d046517202af",
00:13:56.596 "strip_size_kb": 64,
00:13:56.596 "state": "configuring",
00:13:56.596 "raid_level": "raid5f",
00:13:56.596 "superblock": true,
00:13:56.596 "num_base_bdevs": 3,
00:13:56.596 "num_base_bdevs_discovered": 1,
00:13:56.596 "num_base_bdevs_operational": 2,
00:13:56.596 "base_bdevs_list": [
00:13:56.596 {
00:13:56.596 "name": null,
00:13:56.596 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:56.596 "is_configured": false,
00:13:56.596 "data_offset": 2048,
00:13:56.596 "data_size": 63488
00:13:56.596 },
00:13:56.596 {
00:13:56.596 "name": "pt2",
00:13:56.596 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:56.596 "is_configured": true,
00:13:56.596 "data_offset": 2048,
00:13:56.596 "data_size": 63488
00:13:56.596 },
00:13:56.596 {
00:13:56.596 "name": null,
00:13:56.596 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:56.596 "is_configured": false,
00:13:56.596 "data_offset": 2048,
00:13:56.596 "data_size": 63488
00:13:56.596 }
00:13:56.596 ]
00:13:56.596 }'
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:56.596 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.163 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:13:57.163 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test --
bdev/bdev_raid.sh@519 -- # i=2
00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.164 [2024-11-20 13:27:38.550514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:13:57.164 [2024-11-20 13:27:38.550679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:57.164 [2024-11-20 13:27:38.550756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:13:57.164 [2024-11-20 13:27:38.550792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:57.164 [2024-11-20 13:27:38.551315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:57.164 [2024-11-20 13:27:38.551398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:13:57.164 [2024-11-20 13:27:38.551534] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:13:57.164 [2024-11-20 13:27:38.551594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:13:57.164 [2024-11-20 13:27:38.551731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80
00:13:57.164 [2024-11-20 13:27:38.551775] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:13:57.164 [2024-11-20 13:27:38.552087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:13:57.164 [2024-11-20 13:27:38.552697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80
00:13:57.164 [2024-11-20 13:27:38.552760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created
with name raid_bdev1, raid_bdev 0x617000001c80 00:13:57.164 [2024-11-20 13:27:38.553125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.164 pt3 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.164 13:27:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.164 "name": "raid_bdev1", 00:13:57.164 "uuid": "7ed88af9-78dc-40fc-947a-d046517202af", 00:13:57.164 "strip_size_kb": 64, 00:13:57.164 "state": "online", 00:13:57.164 "raid_level": "raid5f", 00:13:57.164 "superblock": true, 00:13:57.164 "num_base_bdevs": 3, 00:13:57.164 "num_base_bdevs_discovered": 2, 00:13:57.164 "num_base_bdevs_operational": 2, 00:13:57.164 "base_bdevs_list": [ 00:13:57.164 { 00:13:57.164 "name": null, 00:13:57.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.164 "is_configured": false, 00:13:57.164 "data_offset": 2048, 00:13:57.164 "data_size": 63488 00:13:57.164 }, 00:13:57.164 { 00:13:57.164 "name": "pt2", 00:13:57.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:57.164 "is_configured": true, 00:13:57.164 "data_offset": 2048, 00:13:57.164 "data_size": 63488 00:13:57.164 }, 00:13:57.164 { 00:13:57.164 "name": "pt3", 00:13:57.164 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:57.164 "is_configured": true, 00:13:57.164 "data_offset": 2048, 00:13:57.164 "data_size": 63488 00:13:57.164 } 00:13:57.164 ] 00:13:57.164 }' 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.164 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.424 13:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:57.424 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.424 13:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.424 [2024-11-20 13:27:39.005901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.424 [2024-11-20 13:27:39.005961] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.424 [2024-11-20 13:27:39.006112] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.424 [2024-11-20 13:27:39.006202] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.424 [2024-11-20 13:27:39.006219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:13:57.424 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.424 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.424 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.424 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.424 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:57.424 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.424 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:57.424 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:57.424 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:57.424 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:57.424 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:57.424 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.424 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.424 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.424 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.425 [2024-11-20 13:27:39.073748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:57.425 [2024-11-20 13:27:39.073952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.425 [2024-11-20 13:27:39.073980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:57.425 [2024-11-20 13:27:39.074011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.425 [2024-11-20 13:27:39.077155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.425 [2024-11-20 13:27:39.077208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:57.425 [2024-11-20 13:27:39.077320] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:57.425 [2024-11-20 13:27:39.077383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:57.425 [2024-11-20 13:27:39.077535] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:57.425 [2024-11-20 13:27:39.077559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.425 [2024-11-20 13:27:39.077583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:13:57.425 [2024-11-20 13:27:39.077635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:57.425 pt1 00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:57.425 13:27:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.425 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.684 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.684 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.684 "name": "raid_bdev1", 00:13:57.684 "uuid": "7ed88af9-78dc-40fc-947a-d046517202af", 00:13:57.684 "strip_size_kb": 64, 00:13:57.684 "state": "configuring", 00:13:57.684 "raid_level": "raid5f", 00:13:57.684 
"superblock": true, 00:13:57.684 "num_base_bdevs": 3, 00:13:57.684 "num_base_bdevs_discovered": 1, 00:13:57.684 "num_base_bdevs_operational": 2, 00:13:57.684 "base_bdevs_list": [ 00:13:57.684 { 00:13:57.684 "name": null, 00:13:57.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.684 "is_configured": false, 00:13:57.684 "data_offset": 2048, 00:13:57.684 "data_size": 63488 00:13:57.684 }, 00:13:57.684 { 00:13:57.684 "name": "pt2", 00:13:57.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:57.684 "is_configured": true, 00:13:57.684 "data_offset": 2048, 00:13:57.684 "data_size": 63488 00:13:57.684 }, 00:13:57.684 { 00:13:57.684 "name": null, 00:13:57.684 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:57.684 "is_configured": false, 00:13:57.684 "data_offset": 2048, 00:13:57.684 "data_size": 63488 00:13:57.684 } 00:13:57.684 ] 00:13:57.684 }' 00:13:57.684 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.684 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.944 [2024-11-20 13:27:39.601038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:57.944 [2024-11-20 13:27:39.601154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.944 [2024-11-20 13:27:39.601181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:57.944 [2024-11-20 13:27:39.601196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.944 [2024-11-20 13:27:39.601805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.944 [2024-11-20 13:27:39.601846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:57.944 [2024-11-20 13:27:39.601953] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:57.944 [2024-11-20 13:27:39.601991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:57.944 [2024-11-20 13:27:39.602143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:13:57.944 [2024-11-20 13:27:39.602176] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:57.944 [2024-11-20 13:27:39.602499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:57.944 [2024-11-20 13:27:39.603168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:13:57.944 [2024-11-20 13:27:39.603185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:13:57.944 [2024-11-20 13:27:39.603418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.944 pt3 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.944 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.208 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.208 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.208 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.208 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.208 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.208 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.208 "name": "raid_bdev1", 00:13:58.208 "uuid": "7ed88af9-78dc-40fc-947a-d046517202af", 00:13:58.208 "strip_size_kb": 64, 00:13:58.208 "state": "online", 00:13:58.208 "raid_level": 
"raid5f", 00:13:58.208 "superblock": true, 00:13:58.208 "num_base_bdevs": 3, 00:13:58.208 "num_base_bdevs_discovered": 2, 00:13:58.208 "num_base_bdevs_operational": 2, 00:13:58.208 "base_bdevs_list": [ 00:13:58.208 { 00:13:58.208 "name": null, 00:13:58.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.208 "is_configured": false, 00:13:58.208 "data_offset": 2048, 00:13:58.208 "data_size": 63488 00:13:58.208 }, 00:13:58.208 { 00:13:58.208 "name": "pt2", 00:13:58.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:58.208 "is_configured": true, 00:13:58.208 "data_offset": 2048, 00:13:58.208 "data_size": 63488 00:13:58.208 }, 00:13:58.208 { 00:13:58.208 "name": "pt3", 00:13:58.208 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:58.208 "is_configured": true, 00:13:58.208 "data_offset": 2048, 00:13:58.208 "data_size": 63488 00:13:58.208 } 00:13:58.208 ] 00:13:58.208 }' 00:13:58.208 13:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.208 13:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.484 [2024-11-20 13:27:40.085582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7ed88af9-78dc-40fc-947a-d046517202af '!=' 7ed88af9-78dc-40fc-947a-d046517202af ']' 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91404 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 91404 ']' 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 91404 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.484 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91404 00:13:58.743 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.743 killing process with pid 91404 00:13:58.743 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.743 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91404' 00:13:58.743 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 91404 00:13:58.743 [2024-11-20 13:27:40.156968] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.743 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 
91404 00:13:58.743 [2024-11-20 13:27:40.157170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.743 [2024-11-20 13:27:40.157260] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.743 [2024-11-20 13:27:40.157281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:13:58.744 [2024-11-20 13:27:40.222325] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.002 13:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:59.002 00:13:59.002 real 0m7.007s 00:13:59.002 user 0m11.690s 00:13:59.002 sys 0m1.452s 00:13:59.002 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.002 ************************************ 00:13:59.002 END TEST raid5f_superblock_test 00:13:59.002 ************************************ 00:13:59.002 13:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.002 13:27:40 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:59.002 13:27:40 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:59.002 13:27:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:59.002 13:27:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.002 13:27:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.002 ************************************ 00:13:59.002 START TEST raid5f_rebuild_test 00:13:59.002 ************************************ 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=3 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:59.002 13:27:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=91842 00:13:59.002 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:59.003 13:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 91842 00:13:59.003 13:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 91842 ']' 00:13:59.003 13:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.003 13:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.003 13:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.003 13:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.003 13:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.262 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:13:59.262 Zero copy mechanism will not be used. 00:13:59.262 [2024-11-20 13:27:40.738168] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:13:59.262 [2024-11-20 13:27:40.738321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91842 ] 00:13:59.262 [2024-11-20 13:27:40.897835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.522 [2024-11-20 13:27:40.944866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.522 [2024-11-20 13:27:41.028374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.522 [2024-11-20 13:27:41.028425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 BaseBdev1_malloc 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.091 13:27:41 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 [2024-11-20 13:27:41.639248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:00.091 [2024-11-20 13:27:41.639345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.091 [2024-11-20 13:27:41.639391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:00.091 [2024-11-20 13:27:41.639408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.091 [2024-11-20 13:27:41.642208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.091 [2024-11-20 13:27:41.642246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:00.091 BaseBdev1 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 BaseBdev2_malloc 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 [2024-11-20 13:27:41.674914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:14:00.091 [2024-11-20 13:27:41.675003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.091 [2024-11-20 13:27:41.675031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:00.091 [2024-11-20 13:27:41.675041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.091 [2024-11-20 13:27:41.677743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.091 [2024-11-20 13:27:41.677791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:00.091 BaseBdev2 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 BaseBdev3_malloc 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 [2024-11-20 13:27:41.710766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:00.091 [2024-11-20 13:27:41.710867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.091 [2024-11-20 13:27:41.710902] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007e80 00:14:00.091 [2024-11-20 13:27:41.710913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.091 [2024-11-20 13:27:41.713808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.091 [2024-11-20 13:27:41.713849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:00.091 BaseBdev3 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 spare_malloc 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.091 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.350 spare_delay 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.350 [2024-11-20 13:27:41.766635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:00.350 [2024-11-20 13:27:41.766732] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.350 [2024-11-20 13:27:41.766774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:00.350 [2024-11-20 13:27:41.766785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.350 [2024-11-20 13:27:41.769689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.350 [2024-11-20 13:27:41.769735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:00.350 spare 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.350 [2024-11-20 13:27:41.778689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:00.350 [2024-11-20 13:27:41.781189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.350 [2024-11-20 13:27:41.781301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:00.350 [2024-11-20 13:27:41.781439] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:00.350 [2024-11-20 13:27:41.781493] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:00.350 [2024-11-20 13:27:41.781831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:00.350 [2024-11-20 13:27:41.782397] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:00.350 [2024-11-20 13:27:41.782446] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:00.350 [2024-11-20 13:27:41.782725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.350 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.351 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.351 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.351 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.351 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.351 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.351 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.351 13:27:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.351 "name": "raid_bdev1", 00:14:00.351 "uuid": "a72a9293-c2c1-4697-ab90-6f39498226cf", 00:14:00.351 "strip_size_kb": 64, 00:14:00.351 "state": "online", 00:14:00.351 "raid_level": "raid5f", 00:14:00.351 "superblock": false, 00:14:00.351 "num_base_bdevs": 3, 00:14:00.351 "num_base_bdevs_discovered": 3, 00:14:00.351 "num_base_bdevs_operational": 3, 00:14:00.351 "base_bdevs_list": [ 00:14:00.351 { 00:14:00.351 "name": "BaseBdev1", 00:14:00.351 "uuid": "052c35e7-61ca-52b5-b21a-8d2d5e4afa8a", 00:14:00.351 "is_configured": true, 00:14:00.351 "data_offset": 0, 00:14:00.351 "data_size": 65536 00:14:00.351 }, 00:14:00.351 { 00:14:00.351 "name": "BaseBdev2", 00:14:00.351 "uuid": "be51259f-d9dd-5e61-9f96-f35ab93cf6a0", 00:14:00.351 "is_configured": true, 00:14:00.351 "data_offset": 0, 00:14:00.351 "data_size": 65536 00:14:00.351 }, 00:14:00.351 { 00:14:00.351 "name": "BaseBdev3", 00:14:00.351 "uuid": "a10a6ff6-1a8e-53be-8985-912f1a2557d3", 00:14:00.351 "is_configured": true, 00:14:00.351 "data_offset": 0, 00:14:00.351 "data_size": 65536 00:14:00.351 } 00:14:00.351 ] 00:14:00.351 }' 00:14:00.351 13:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.351 13:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.610 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:00.610 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:00.610 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.610 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.610 [2024-11-20 13:27:42.274163] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:14:00.870 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:01.130 [2024-11-20 13:27:42.545560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:01.130 /dev/nbd0 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.130 1+0 records in 00:14:01.130 1+0 records out 00:14:01.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670979 s, 6.1 MB/s 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:01.130 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.131 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.131 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:01.131 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:01.131 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:01.131 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:01.390 512+0 records in 00:14:01.390 512+0 records out 00:14:01.390 67108864 bytes (67 MB, 64 MiB) copied, 0.363793 s, 184 MB/s 00:14:01.390 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:01.390 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.390 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:01.390 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.390 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:01.390 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.390 13:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:01.649 [2024-11-20 13:27:43.228161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.649 [2024-11-20 13:27:43.246663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.649 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.649 "name": "raid_bdev1", 00:14:01.649 "uuid": "a72a9293-c2c1-4697-ab90-6f39498226cf", 00:14:01.649 "strip_size_kb": 64, 00:14:01.649 "state": "online", 00:14:01.649 "raid_level": "raid5f", 00:14:01.649 "superblock": false, 00:14:01.649 "num_base_bdevs": 3, 00:14:01.649 "num_base_bdevs_discovered": 2, 00:14:01.649 "num_base_bdevs_operational": 2, 00:14:01.649 "base_bdevs_list": [ 00:14:01.649 { 00:14:01.649 "name": null, 00:14:01.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.649 "is_configured": false, 00:14:01.649 "data_offset": 0, 00:14:01.649 "data_size": 65536 00:14:01.649 }, 00:14:01.649 { 00:14:01.649 "name": "BaseBdev2", 00:14:01.649 "uuid": "be51259f-d9dd-5e61-9f96-f35ab93cf6a0", 00:14:01.650 "is_configured": true, 00:14:01.650 "data_offset": 0, 00:14:01.650 "data_size": 65536 00:14:01.650 }, 00:14:01.650 { 00:14:01.650 "name": "BaseBdev3", 00:14:01.650 "uuid": 
"a10a6ff6-1a8e-53be-8985-912f1a2557d3", 00:14:01.650 "is_configured": true, 00:14:01.650 "data_offset": 0, 00:14:01.650 "data_size": 65536 00:14:01.650 } 00:14:01.650 ] 00:14:01.650 }' 00:14:01.650 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.650 13:27:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.218 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:02.218 13:27:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.218 13:27:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.218 [2024-11-20 13:27:43.686082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.218 [2024-11-20 13:27:43.694945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027cd0 00:14:02.218 13:27:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.218 13:27:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:02.218 [2024-11-20 13:27:43.697704] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:03.157 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.157 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.157 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.157 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.157 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.157 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.157 13:27:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.157 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.157 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.157 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.157 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.157 "name": "raid_bdev1", 00:14:03.157 "uuid": "a72a9293-c2c1-4697-ab90-6f39498226cf", 00:14:03.157 "strip_size_kb": 64, 00:14:03.157 "state": "online", 00:14:03.157 "raid_level": "raid5f", 00:14:03.157 "superblock": false, 00:14:03.157 "num_base_bdevs": 3, 00:14:03.157 "num_base_bdevs_discovered": 3, 00:14:03.157 "num_base_bdevs_operational": 3, 00:14:03.157 "process": { 00:14:03.157 "type": "rebuild", 00:14:03.157 "target": "spare", 00:14:03.157 "progress": { 00:14:03.157 "blocks": 20480, 00:14:03.157 "percent": 15 00:14:03.157 } 00:14:03.157 }, 00:14:03.157 "base_bdevs_list": [ 00:14:03.157 { 00:14:03.157 "name": "spare", 00:14:03.157 "uuid": "f71b9c06-6a7c-5645-a119-20b1deb68146", 00:14:03.157 "is_configured": true, 00:14:03.157 "data_offset": 0, 00:14:03.157 "data_size": 65536 00:14:03.157 }, 00:14:03.157 { 00:14:03.157 "name": "BaseBdev2", 00:14:03.157 "uuid": "be51259f-d9dd-5e61-9f96-f35ab93cf6a0", 00:14:03.157 "is_configured": true, 00:14:03.157 "data_offset": 0, 00:14:03.157 "data_size": 65536 00:14:03.157 }, 00:14:03.157 { 00:14:03.157 "name": "BaseBdev3", 00:14:03.157 "uuid": "a10a6ff6-1a8e-53be-8985-912f1a2557d3", 00:14:03.157 "is_configured": true, 00:14:03.157 "data_offset": 0, 00:14:03.157 "data_size": 65536 00:14:03.157 } 00:14:03.157 ] 00:14:03.157 }' 00:14:03.157 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.157 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:03.157 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.468 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.468 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:03.468 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.468 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.468 [2024-11-20 13:27:44.846551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.468 [2024-11-20 13:27:44.913847] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:03.468 [2024-11-20 13:27:44.914033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.468 [2024-11-20 13:27:44.914076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.468 [2024-11-20 13:27:44.914092] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:03.468 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.468 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:03.468 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.468 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.468 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.468 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.468 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:14:03.468 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.469 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.469 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.469 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.469 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.469 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.469 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.469 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.469 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.469 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.469 "name": "raid_bdev1", 00:14:03.469 "uuid": "a72a9293-c2c1-4697-ab90-6f39498226cf", 00:14:03.469 "strip_size_kb": 64, 00:14:03.469 "state": "online", 00:14:03.469 "raid_level": "raid5f", 00:14:03.469 "superblock": false, 00:14:03.469 "num_base_bdevs": 3, 00:14:03.469 "num_base_bdevs_discovered": 2, 00:14:03.469 "num_base_bdevs_operational": 2, 00:14:03.469 "base_bdevs_list": [ 00:14:03.469 { 00:14:03.469 "name": null, 00:14:03.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.469 "is_configured": false, 00:14:03.469 "data_offset": 0, 00:14:03.469 "data_size": 65536 00:14:03.469 }, 00:14:03.469 { 00:14:03.469 "name": "BaseBdev2", 00:14:03.469 "uuid": "be51259f-d9dd-5e61-9f96-f35ab93cf6a0", 00:14:03.469 "is_configured": true, 00:14:03.469 "data_offset": 0, 00:14:03.469 "data_size": 65536 00:14:03.469 }, 00:14:03.469 { 00:14:03.469 "name": "BaseBdev3", 00:14:03.469 "uuid": 
"a10a6ff6-1a8e-53be-8985-912f1a2557d3", 00:14:03.469 "is_configured": true, 00:14:03.469 "data_offset": 0, 00:14:03.469 "data_size": 65536 00:14:03.469 } 00:14:03.469 ] 00:14:03.469 }' 00:14:03.469 13:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.469 13:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.039 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:04.039 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.039 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:04.039 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:04.039 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.039 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.039 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.039 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.039 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.039 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.039 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.039 "name": "raid_bdev1", 00:14:04.039 "uuid": "a72a9293-c2c1-4697-ab90-6f39498226cf", 00:14:04.039 "strip_size_kb": 64, 00:14:04.039 "state": "online", 00:14:04.039 "raid_level": "raid5f", 00:14:04.039 "superblock": false, 00:14:04.039 "num_base_bdevs": 3, 00:14:04.039 "num_base_bdevs_discovered": 2, 00:14:04.039 "num_base_bdevs_operational": 2, 00:14:04.039 "base_bdevs_list": [ 00:14:04.039 { 00:14:04.039 
"name": null, 00:14:04.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.039 "is_configured": false, 00:14:04.039 "data_offset": 0, 00:14:04.039 "data_size": 65536 00:14:04.040 }, 00:14:04.040 { 00:14:04.040 "name": "BaseBdev2", 00:14:04.040 "uuid": "be51259f-d9dd-5e61-9f96-f35ab93cf6a0", 00:14:04.040 "is_configured": true, 00:14:04.040 "data_offset": 0, 00:14:04.040 "data_size": 65536 00:14:04.040 }, 00:14:04.040 { 00:14:04.040 "name": "BaseBdev3", 00:14:04.040 "uuid": "a10a6ff6-1a8e-53be-8985-912f1a2557d3", 00:14:04.040 "is_configured": true, 00:14:04.040 "data_offset": 0, 00:14:04.040 "data_size": 65536 00:14:04.040 } 00:14:04.040 ] 00:14:04.040 }' 00:14:04.040 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.040 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:04.040 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.040 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:04.040 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:04.040 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.040 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.040 [2024-11-20 13:27:45.560409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:04.040 [2024-11-20 13:27:45.569039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:14:04.040 13:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.040 13:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:04.040 [2024-11-20 13:27:45.571914] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:14:04.976 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.976 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.976 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.976 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.976 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.976 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.976 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.976 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.976 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.976 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.976 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.976 "name": "raid_bdev1", 00:14:04.976 "uuid": "a72a9293-c2c1-4697-ab90-6f39498226cf", 00:14:04.976 "strip_size_kb": 64, 00:14:04.976 "state": "online", 00:14:04.976 "raid_level": "raid5f", 00:14:04.976 "superblock": false, 00:14:04.976 "num_base_bdevs": 3, 00:14:04.976 "num_base_bdevs_discovered": 3, 00:14:04.976 "num_base_bdevs_operational": 3, 00:14:04.976 "process": { 00:14:04.976 "type": "rebuild", 00:14:04.976 "target": "spare", 00:14:04.976 "progress": { 00:14:04.976 "blocks": 20480, 00:14:04.976 "percent": 15 00:14:04.976 } 00:14:04.976 }, 00:14:04.976 "base_bdevs_list": [ 00:14:04.976 { 00:14:04.976 "name": "spare", 00:14:04.976 "uuid": "f71b9c06-6a7c-5645-a119-20b1deb68146", 00:14:04.976 "is_configured": true, 00:14:04.976 "data_offset": 0, 
00:14:04.976 "data_size": 65536 00:14:04.976 }, 00:14:04.976 { 00:14:04.976 "name": "BaseBdev2", 00:14:04.976 "uuid": "be51259f-d9dd-5e61-9f96-f35ab93cf6a0", 00:14:04.976 "is_configured": true, 00:14:04.976 "data_offset": 0, 00:14:04.976 "data_size": 65536 00:14:04.976 }, 00:14:04.976 { 00:14:04.976 "name": "BaseBdev3", 00:14:04.976 "uuid": "a10a6ff6-1a8e-53be-8985-912f1a2557d3", 00:14:04.976 "is_configured": true, 00:14:04.976 "data_offset": 0, 00:14:04.976 "data_size": 65536 00:14:04.976 } 00:14:04.976 ] 00:14:04.976 }' 00:14:04.976 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=455 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.236 13:27:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.236 "name": "raid_bdev1", 00:14:05.236 "uuid": "a72a9293-c2c1-4697-ab90-6f39498226cf", 00:14:05.236 "strip_size_kb": 64, 00:14:05.236 "state": "online", 00:14:05.236 "raid_level": "raid5f", 00:14:05.236 "superblock": false, 00:14:05.236 "num_base_bdevs": 3, 00:14:05.236 "num_base_bdevs_discovered": 3, 00:14:05.236 "num_base_bdevs_operational": 3, 00:14:05.236 "process": { 00:14:05.236 "type": "rebuild", 00:14:05.236 "target": "spare", 00:14:05.236 "progress": { 00:14:05.236 "blocks": 22528, 00:14:05.236 "percent": 17 00:14:05.236 } 00:14:05.236 }, 00:14:05.236 "base_bdevs_list": [ 00:14:05.236 { 00:14:05.236 "name": "spare", 00:14:05.236 "uuid": "f71b9c06-6a7c-5645-a119-20b1deb68146", 00:14:05.236 "is_configured": true, 00:14:05.236 "data_offset": 0, 00:14:05.236 "data_size": 65536 00:14:05.236 }, 00:14:05.236 { 00:14:05.236 "name": "BaseBdev2", 00:14:05.236 "uuid": "be51259f-d9dd-5e61-9f96-f35ab93cf6a0", 00:14:05.236 "is_configured": true, 00:14:05.236 "data_offset": 0, 00:14:05.236 "data_size": 65536 00:14:05.236 }, 00:14:05.236 { 00:14:05.236 "name": "BaseBdev3", 00:14:05.236 "uuid": "a10a6ff6-1a8e-53be-8985-912f1a2557d3", 00:14:05.236 "is_configured": true, 00:14:05.236 "data_offset": 0, 00:14:05.236 "data_size": 65536 00:14:05.236 } 
00:14:05.236 ] 00:14:05.236 }' 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.236 13:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:06.615 13:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.616 13:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.616 13:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.616 13:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.616 13:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.616 13:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.616 13:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.616 13:27:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.616 13:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.616 13:27:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.616 13:27:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.616 13:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.616 "name": "raid_bdev1", 00:14:06.616 "uuid": "a72a9293-c2c1-4697-ab90-6f39498226cf", 00:14:06.616 
"strip_size_kb": 64, 00:14:06.616 "state": "online", 00:14:06.616 "raid_level": "raid5f", 00:14:06.616 "superblock": false, 00:14:06.616 "num_base_bdevs": 3, 00:14:06.616 "num_base_bdevs_discovered": 3, 00:14:06.616 "num_base_bdevs_operational": 3, 00:14:06.616 "process": { 00:14:06.616 "type": "rebuild", 00:14:06.616 "target": "spare", 00:14:06.616 "progress": { 00:14:06.616 "blocks": 47104, 00:14:06.616 "percent": 35 00:14:06.616 } 00:14:06.616 }, 00:14:06.616 "base_bdevs_list": [ 00:14:06.616 { 00:14:06.616 "name": "spare", 00:14:06.616 "uuid": "f71b9c06-6a7c-5645-a119-20b1deb68146", 00:14:06.616 "is_configured": true, 00:14:06.616 "data_offset": 0, 00:14:06.616 "data_size": 65536 00:14:06.616 }, 00:14:06.616 { 00:14:06.616 "name": "BaseBdev2", 00:14:06.616 "uuid": "be51259f-d9dd-5e61-9f96-f35ab93cf6a0", 00:14:06.616 "is_configured": true, 00:14:06.616 "data_offset": 0, 00:14:06.616 "data_size": 65536 00:14:06.616 }, 00:14:06.616 { 00:14:06.616 "name": "BaseBdev3", 00:14:06.616 "uuid": "a10a6ff6-1a8e-53be-8985-912f1a2557d3", 00:14:06.616 "is_configured": true, 00:14:06.616 "data_offset": 0, 00:14:06.616 "data_size": 65536 00:14:06.616 } 00:14:06.616 ] 00:14:06.616 }' 00:14:06.616 13:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.616 13:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.616 13:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.616 13:27:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.616 13:27:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:07.554 13:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:07.554 13:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.554 13:27:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.554 13:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.554 13:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.554 13:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.554 13:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.554 13:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.554 13:27:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.554 13:27:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.554 13:27:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.554 13:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.554 "name": "raid_bdev1", 00:14:07.554 "uuid": "a72a9293-c2c1-4697-ab90-6f39498226cf", 00:14:07.554 "strip_size_kb": 64, 00:14:07.554 "state": "online", 00:14:07.554 "raid_level": "raid5f", 00:14:07.554 "superblock": false, 00:14:07.554 "num_base_bdevs": 3, 00:14:07.554 "num_base_bdevs_discovered": 3, 00:14:07.554 "num_base_bdevs_operational": 3, 00:14:07.554 "process": { 00:14:07.554 "type": "rebuild", 00:14:07.554 "target": "spare", 00:14:07.554 "progress": { 00:14:07.554 "blocks": 69632, 00:14:07.554 "percent": 53 00:14:07.554 } 00:14:07.554 }, 00:14:07.554 "base_bdevs_list": [ 00:14:07.554 { 00:14:07.554 "name": "spare", 00:14:07.554 "uuid": "f71b9c06-6a7c-5645-a119-20b1deb68146", 00:14:07.554 "is_configured": true, 00:14:07.554 "data_offset": 0, 00:14:07.554 "data_size": 65536 00:14:07.554 }, 00:14:07.554 { 00:14:07.554 "name": "BaseBdev2", 00:14:07.554 "uuid": "be51259f-d9dd-5e61-9f96-f35ab93cf6a0", 00:14:07.554 
"is_configured": true, 00:14:07.554 "data_offset": 0, 00:14:07.554 "data_size": 65536 00:14:07.554 }, 00:14:07.554 { 00:14:07.554 "name": "BaseBdev3", 00:14:07.554 "uuid": "a10a6ff6-1a8e-53be-8985-912f1a2557d3", 00:14:07.554 "is_configured": true, 00:14:07.554 "data_offset": 0, 00:14:07.554 "data_size": 65536 00:14:07.554 } 00:14:07.554 ] 00:14:07.554 }' 00:14:07.554 13:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.554 13:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.554 13:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.554 13:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.554 13:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:08.930 13:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:08.931 13:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.931 13:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.931 13:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.931 13:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.931 13:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.931 13:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.931 13:27:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.931 13:27:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.931 13:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:08.931 13:27:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.931 13:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.931 "name": "raid_bdev1", 00:14:08.931 "uuid": "a72a9293-c2c1-4697-ab90-6f39498226cf", 00:14:08.931 "strip_size_kb": 64, 00:14:08.931 "state": "online", 00:14:08.931 "raid_level": "raid5f", 00:14:08.931 "superblock": false, 00:14:08.931 "num_base_bdevs": 3, 00:14:08.931 "num_base_bdevs_discovered": 3, 00:14:08.931 "num_base_bdevs_operational": 3, 00:14:08.931 "process": { 00:14:08.931 "type": "rebuild", 00:14:08.931 "target": "spare", 00:14:08.931 "progress": { 00:14:08.931 "blocks": 92160, 00:14:08.931 "percent": 70 00:14:08.931 } 00:14:08.931 }, 00:14:08.931 "base_bdevs_list": [ 00:14:08.931 { 00:14:08.931 "name": "spare", 00:14:08.931 "uuid": "f71b9c06-6a7c-5645-a119-20b1deb68146", 00:14:08.931 "is_configured": true, 00:14:08.931 "data_offset": 0, 00:14:08.931 "data_size": 65536 00:14:08.931 }, 00:14:08.931 { 00:14:08.931 "name": "BaseBdev2", 00:14:08.931 "uuid": "be51259f-d9dd-5e61-9f96-f35ab93cf6a0", 00:14:08.931 "is_configured": true, 00:14:08.931 "data_offset": 0, 00:14:08.931 "data_size": 65536 00:14:08.931 }, 00:14:08.931 { 00:14:08.931 "name": "BaseBdev3", 00:14:08.931 "uuid": "a10a6ff6-1a8e-53be-8985-912f1a2557d3", 00:14:08.931 "is_configured": true, 00:14:08.931 "data_offset": 0, 00:14:08.931 "data_size": 65536 00:14:08.931 } 00:14:08.931 ] 00:14:08.931 }' 00:14:08.931 13:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.931 13:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.931 13:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.931 13:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.931 13:27:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.883 "name": "raid_bdev1", 00:14:09.883 "uuid": "a72a9293-c2c1-4697-ab90-6f39498226cf", 00:14:09.883 "strip_size_kb": 64, 00:14:09.883 "state": "online", 00:14:09.883 "raid_level": "raid5f", 00:14:09.883 "superblock": false, 00:14:09.883 "num_base_bdevs": 3, 00:14:09.883 "num_base_bdevs_discovered": 3, 00:14:09.883 "num_base_bdevs_operational": 3, 00:14:09.883 "process": { 00:14:09.883 "type": "rebuild", 00:14:09.883 "target": "spare", 00:14:09.883 "progress": { 00:14:09.883 "blocks": 116736, 00:14:09.883 "percent": 89 00:14:09.883 } 00:14:09.883 }, 00:14:09.883 "base_bdevs_list": [ 00:14:09.883 { 
00:14:09.883 "name": "spare", 00:14:09.883 "uuid": "f71b9c06-6a7c-5645-a119-20b1deb68146", 00:14:09.883 "is_configured": true, 00:14:09.883 "data_offset": 0, 00:14:09.883 "data_size": 65536 00:14:09.883 }, 00:14:09.883 { 00:14:09.883 "name": "BaseBdev2", 00:14:09.883 "uuid": "be51259f-d9dd-5e61-9f96-f35ab93cf6a0", 00:14:09.883 "is_configured": true, 00:14:09.883 "data_offset": 0, 00:14:09.883 "data_size": 65536 00:14:09.883 }, 00:14:09.883 { 00:14:09.883 "name": "BaseBdev3", 00:14:09.883 "uuid": "a10a6ff6-1a8e-53be-8985-912f1a2557d3", 00:14:09.883 "is_configured": true, 00:14:09.883 "data_offset": 0, 00:14:09.883 "data_size": 65536 00:14:09.883 } 00:14:09.883 ] 00:14:09.883 }' 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.883 13:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:10.449 [2024-11-20 13:27:52.041144] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:10.449 [2024-11-20 13:27:52.041252] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:10.449 [2024-11-20 13:27:52.041314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.017 13:27:52 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.017 "name": "raid_bdev1", 00:14:11.017 "uuid": "a72a9293-c2c1-4697-ab90-6f39498226cf", 00:14:11.017 "strip_size_kb": 64, 00:14:11.017 "state": "online", 00:14:11.017 "raid_level": "raid5f", 00:14:11.017 "superblock": false, 00:14:11.017 "num_base_bdevs": 3, 00:14:11.017 "num_base_bdevs_discovered": 3, 00:14:11.017 "num_base_bdevs_operational": 3, 00:14:11.017 "base_bdevs_list": [ 00:14:11.017 { 00:14:11.017 "name": "spare", 00:14:11.017 "uuid": "f71b9c06-6a7c-5645-a119-20b1deb68146", 00:14:11.017 "is_configured": true, 00:14:11.017 "data_offset": 0, 00:14:11.017 "data_size": 65536 00:14:11.017 }, 00:14:11.017 { 00:14:11.017 "name": "BaseBdev2", 00:14:11.017 "uuid": "be51259f-d9dd-5e61-9f96-f35ab93cf6a0", 00:14:11.017 "is_configured": true, 00:14:11.017 "data_offset": 0, 00:14:11.017 "data_size": 65536 00:14:11.017 }, 00:14:11.017 { 00:14:11.017 "name": "BaseBdev3", 00:14:11.017 "uuid": "a10a6ff6-1a8e-53be-8985-912f1a2557d3", 00:14:11.017 "is_configured": true, 00:14:11.017 "data_offset": 0, 00:14:11.017 "data_size": 65536 00:14:11.017 } 
00:14:11.017 ] 00:14:11.017 }' 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.017 13:27:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.275 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.275 "name": "raid_bdev1", 00:14:11.275 "uuid": "a72a9293-c2c1-4697-ab90-6f39498226cf", 00:14:11.275 "strip_size_kb": 64, 00:14:11.275 "state": "online", 00:14:11.275 "raid_level": "raid5f", 00:14:11.275 "superblock": false, 
00:14:11.275 "num_base_bdevs": 3, 00:14:11.275 "num_base_bdevs_discovered": 3, 00:14:11.275 "num_base_bdevs_operational": 3, 00:14:11.275 "base_bdevs_list": [ 00:14:11.275 { 00:14:11.275 "name": "spare", 00:14:11.275 "uuid": "f71b9c06-6a7c-5645-a119-20b1deb68146", 00:14:11.275 "is_configured": true, 00:14:11.275 "data_offset": 0, 00:14:11.275 "data_size": 65536 00:14:11.275 }, 00:14:11.275 { 00:14:11.275 "name": "BaseBdev2", 00:14:11.275 "uuid": "be51259f-d9dd-5e61-9f96-f35ab93cf6a0", 00:14:11.275 "is_configured": true, 00:14:11.275 "data_offset": 0, 00:14:11.275 "data_size": 65536 00:14:11.275 }, 00:14:11.275 { 00:14:11.275 "name": "BaseBdev3", 00:14:11.275 "uuid": "a10a6ff6-1a8e-53be-8985-912f1a2557d3", 00:14:11.275 "is_configured": true, 00:14:11.275 "data_offset": 0, 00:14:11.275 "data_size": 65536 00:14:11.275 } 00:14:11.275 ] 00:14:11.275 }' 00:14:11.275 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.275 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:11.275 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.275 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:11.275 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:11.275 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.275 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.275 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:11.275 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.275 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.275 
13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.276 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.276 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.276 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.276 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.276 13:27:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.276 13:27:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.276 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.276 13:27:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.276 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.276 "name": "raid_bdev1", 00:14:11.276 "uuid": "a72a9293-c2c1-4697-ab90-6f39498226cf", 00:14:11.276 "strip_size_kb": 64, 00:14:11.276 "state": "online", 00:14:11.276 "raid_level": "raid5f", 00:14:11.276 "superblock": false, 00:14:11.276 "num_base_bdevs": 3, 00:14:11.276 "num_base_bdevs_discovered": 3, 00:14:11.276 "num_base_bdevs_operational": 3, 00:14:11.276 "base_bdevs_list": [ 00:14:11.276 { 00:14:11.276 "name": "spare", 00:14:11.276 "uuid": "f71b9c06-6a7c-5645-a119-20b1deb68146", 00:14:11.276 "is_configured": true, 00:14:11.276 "data_offset": 0, 00:14:11.276 "data_size": 65536 00:14:11.276 }, 00:14:11.276 { 00:14:11.276 "name": "BaseBdev2", 00:14:11.276 "uuid": "be51259f-d9dd-5e61-9f96-f35ab93cf6a0", 00:14:11.276 "is_configured": true, 00:14:11.276 "data_offset": 0, 00:14:11.276 "data_size": 65536 00:14:11.276 }, 00:14:11.276 { 00:14:11.276 "name": "BaseBdev3", 00:14:11.276 "uuid": "a10a6ff6-1a8e-53be-8985-912f1a2557d3", 
00:14:11.276 "is_configured": true, 00:14:11.276 "data_offset": 0, 00:14:11.276 "data_size": 65536 00:14:11.276 } 00:14:11.276 ] 00:14:11.276 }' 00:14:11.276 13:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.276 13:27:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.843 [2024-11-20 13:27:53.261507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:11.843 [2024-11-20 13:27:53.261648] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.843 [2024-11-20 13:27:53.261780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.843 [2024-11-20 13:27:53.261891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.843 [2024-11-20 13:27:53.261905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:11.843 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:11.844 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:12.103 /dev/nbd0 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.103 1+0 records in 00:14:12.103 1+0 records out 00:14:12.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386662 s, 10.6 MB/s 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:12.103 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:12.362 /dev/nbd1 00:14:12.362 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:12.362 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:12.362 13:27:53 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:12.362 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:12.362 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:12.362 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:12.362 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:12.362 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:12.362 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:12.362 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:12.362 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.362 1+0 records in 00:14:12.362 1+0 records out 00:14:12.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576033 s, 7.1 MB/s 00:14:12.362 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.362 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:12.362 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.362 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:12.362 13:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:12.362 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.363 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:12.363 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:12.363 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:12.363 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.363 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:12.363 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:12.363 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:12.363 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.363 13:27:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:12.621 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:12.621 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:12.621 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:12.621 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.621 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.621 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:12.621 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:12.621 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.621 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.621 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:12.880 13:27:54 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:12.880 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:12.880 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:12.880 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.880 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.880 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:12.881 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:12.881 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.881 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:12.881 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 91842 00:14:12.881 13:27:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 91842 ']' 00:14:12.881 13:27:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 91842 00:14:12.881 13:27:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:12.881 13:27:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.881 13:27:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91842 00:14:12.881 killing process with pid 91842 00:14:12.881 Received shutdown signal, test time was about 60.000000 seconds 00:14:12.881 00:14:12.881 Latency(us) 00:14:12.881 [2024-11-20T13:27:54.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.881 [2024-11-20T13:27:54.549Z] =================================================================================================================== 00:14:12.881 [2024-11-20T13:27:54.549Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:14:12.881 13:27:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.881 13:27:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.881 13:27:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91842' 00:14:12.881 13:27:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 91842 00:14:12.881 13:27:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 91842 00:14:12.881 [2024-11-20 13:27:54.511784] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:13.139 [2024-11-20 13:27:54.555185] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:13.139 13:27:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:13.139 00:14:13.139 real 0m14.125s 00:14:13.139 user 0m17.806s 00:14:13.139 sys 0m2.155s 00:14:13.139 13:27:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.139 ************************************ 00:14:13.139 END TEST raid5f_rebuild_test 00:14:13.139 ************************************ 00:14:13.139 13:27:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.139 13:27:54 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:14:13.139 13:27:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:13.139 13:27:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.139 13:27:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:13.398 ************************************ 00:14:13.398 START TEST raid5f_rebuild_test_sb 00:14:13.398 ************************************ 00:14:13.398 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 
00:14:13.398 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:13.398 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:13.398 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:13.398 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:13.398 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:13.398 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:13.398 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:13.398 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:13.398 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:13.398 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:13.398 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92266 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92266 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 92266 ']' 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.399 13:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:13.399 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:13.399 Zero copy mechanism will not be used. 00:14:13.399 [2024-11-20 13:27:54.911631] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:14:13.399 [2024-11-20 13:27:54.911806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92266 ] 00:14:13.679 [2024-11-20 13:27:55.068849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.679 [2024-11-20 13:27:55.101562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.679 [2024-11-20 13:27:55.148156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.679 [2024-11-20 13:27:55.148302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.245 
13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.245 BaseBdev1_malloc 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.245 [2024-11-20 13:27:55.890019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:14.245 [2024-11-20 13:27:55.890099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.245 [2024-11-20 13:27:55.890134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:14.245 [2024-11-20 13:27:55.890157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.245 [2024-11-20 13:27:55.892850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.245 [2024-11-20 13:27:55.892911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:14.245 BaseBdev1 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.245 BaseBdev2_malloc 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.245 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.504 [2024-11-20 13:27:55.916161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:14.504 [2024-11-20 13:27:55.916250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.504 [2024-11-20 13:27:55.916279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:14.504 [2024-11-20 13:27:55.916291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.504 [2024-11-20 13:27:55.918963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.504 [2024-11-20 13:27:55.919033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:14.504 BaseBdev2 00:14:14.504 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.504 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:14.504 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:14.504 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.504 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.504 BaseBdev3_malloc 00:14:14.504 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.504 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:14:14.504 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.504 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.504 [2024-11-20 13:27:55.945789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:14.504 [2024-11-20 13:27:55.945878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.504 [2024-11-20 13:27:55.945908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:14.504 [2024-11-20 13:27:55.945918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.504 [2024-11-20 13:27:55.948473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.504 [2024-11-20 13:27:55.948524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:14.504 BaseBdev3 00:14:14.504 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.504 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:14.504 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.504 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.504 spare_malloc 00:14:14.504 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.504 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:14.504 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.505 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.505 spare_delay 00:14:14.505 
13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.505 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:14.505 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.505 13:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.505 [2024-11-20 13:27:55.996937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:14.505 [2024-11-20 13:27:55.997043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.505 [2024-11-20 13:27:55.997085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:14.505 [2024-11-20 13:27:55.997096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.505 [2024-11-20 13:27:55.999771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.505 spare 00:14:14.505 [2024-11-20 13:27:55.999912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.505 [2024-11-20 13:27:56.009030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.505 [2024-11-20 13:27:56.011249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.505 [2024-11-20 13:27:56.011328] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:14.505 [2024-11-20 13:27:56.011539] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:14.505 [2024-11-20 13:27:56.011558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:14.505 [2024-11-20 13:27:56.011920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:14.505 [2024-11-20 13:27:56.012559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:14.505 [2024-11-20 13:27:56.012582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:14.505 [2024-11-20 13:27:56.012881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.505 "name": "raid_bdev1", 00:14:14.505 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:14.505 "strip_size_kb": 64, 00:14:14.505 "state": "online", 00:14:14.505 "raid_level": "raid5f", 00:14:14.505 "superblock": true, 00:14:14.505 "num_base_bdevs": 3, 00:14:14.505 "num_base_bdevs_discovered": 3, 00:14:14.505 "num_base_bdevs_operational": 3, 00:14:14.505 "base_bdevs_list": [ 00:14:14.505 { 00:14:14.505 "name": "BaseBdev1", 00:14:14.505 "uuid": "ea1ec451-1df5-5c8a-b40e-e4dcf7d6e766", 00:14:14.505 "is_configured": true, 00:14:14.505 "data_offset": 2048, 00:14:14.505 "data_size": 63488 00:14:14.505 }, 00:14:14.505 { 00:14:14.505 "name": "BaseBdev2", 00:14:14.505 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:14.505 "is_configured": true, 00:14:14.505 "data_offset": 2048, 00:14:14.505 "data_size": 63488 00:14:14.505 }, 00:14:14.505 { 00:14:14.505 "name": "BaseBdev3", 00:14:14.505 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:14.505 "is_configured": true, 00:14:14.505 "data_offset": 2048, 00:14:14.505 "data_size": 63488 00:14:14.505 } 00:14:14.505 ] 00:14:14.505 }' 00:14:14.505 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.505 13:27:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.072 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:15.072 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:15.072 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.072 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.072 [2024-11-20 13:27:56.500601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.072 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.072 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:14:15.072 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:15.072 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.072 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.072 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.072 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.072 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:15.072 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:15.072 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:15.073 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:15.073 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:15.073 13:27:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.073 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:15.073 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:15.073 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:15.073 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:15.073 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:15.073 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:15.073 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:15.073 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:15.331 [2024-11-20 13:27:56.847806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:15.331 /dev/nbd0 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.331 1+0 records in 00:14:15.331 1+0 records out 00:14:15.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481586 s, 8.5 MB/s 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:15.331 13:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:14:15.897 496+0 records in 00:14:15.897 496+0 records out 00:14:15.897 65011712 bytes (65 MB, 62 MiB) copied, 0.450714 s, 144 MB/s 00:14:15.897 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:15.897 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.897 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:15.897 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:15.897 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:15.897 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.897 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:16.155 [2024-11-20 13:27:57.610491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:16.155 [2024-11-20 13:27:57.624052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.155 "name": "raid_bdev1", 00:14:16.155 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:16.155 "strip_size_kb": 64, 00:14:16.155 "state": "online", 00:14:16.155 "raid_level": "raid5f", 00:14:16.155 "superblock": true, 00:14:16.155 "num_base_bdevs": 3, 00:14:16.155 "num_base_bdevs_discovered": 2, 00:14:16.155 "num_base_bdevs_operational": 2, 00:14:16.155 "base_bdevs_list": [ 00:14:16.155 { 00:14:16.155 "name": null, 00:14:16.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.155 "is_configured": false, 00:14:16.155 "data_offset": 0, 00:14:16.155 "data_size": 63488 00:14:16.155 }, 00:14:16.155 { 00:14:16.155 "name": "BaseBdev2", 00:14:16.155 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:16.155 "is_configured": true, 00:14:16.155 "data_offset": 2048, 00:14:16.155 "data_size": 63488 00:14:16.155 }, 00:14:16.155 { 00:14:16.155 "name": "BaseBdev3", 00:14:16.155 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:16.155 "is_configured": true, 00:14:16.155 "data_offset": 2048, 00:14:16.155 "data_size": 63488 00:14:16.155 } 00:14:16.155 ] 00:14:16.155 }' 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.155 13:27:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.720 13:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:16.720 13:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.720 13:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.720 [2024-11-20 13:27:58.111757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:16.720 [2024-11-20 13:27:58.117066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000255d0 00:14:16.720 13:27:58 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.720 13:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:16.720 [2024-11-20 13:27:58.119946] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.736 "name": "raid_bdev1", 00:14:17.736 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:17.736 "strip_size_kb": 64, 00:14:17.736 "state": "online", 00:14:17.736 "raid_level": "raid5f", 00:14:17.736 "superblock": true, 00:14:17.736 "num_base_bdevs": 3, 00:14:17.736 "num_base_bdevs_discovered": 3, 00:14:17.736 "num_base_bdevs_operational": 3, 00:14:17.736 "process": { 00:14:17.736 "type": "rebuild", 00:14:17.736 "target": "spare", 00:14:17.736 "progress": { 
00:14:17.736 "blocks": 20480, 00:14:17.736 "percent": 16 00:14:17.736 } 00:14:17.736 }, 00:14:17.736 "base_bdevs_list": [ 00:14:17.736 { 00:14:17.736 "name": "spare", 00:14:17.736 "uuid": "22569fe3-f130-5f37-a8cd-e337992093ca", 00:14:17.736 "is_configured": true, 00:14:17.736 "data_offset": 2048, 00:14:17.736 "data_size": 63488 00:14:17.736 }, 00:14:17.736 { 00:14:17.736 "name": "BaseBdev2", 00:14:17.736 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:17.736 "is_configured": true, 00:14:17.736 "data_offset": 2048, 00:14:17.736 "data_size": 63488 00:14:17.736 }, 00:14:17.736 { 00:14:17.736 "name": "BaseBdev3", 00:14:17.736 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:17.736 "is_configured": true, 00:14:17.736 "data_offset": 2048, 00:14:17.736 "data_size": 63488 00:14:17.736 } 00:14:17.736 ] 00:14:17.736 }' 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.736 [2024-11-20 13:27:59.277143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:17.736 [2024-11-20 13:27:59.332543] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:17.736 [2024-11-20 13:27:59.332781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:17.736 [2024-11-20 13:27:59.332807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:17.736 [2024-11-20 13:27:59.332829] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.736 13:27:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.736 "name": "raid_bdev1", 00:14:17.736 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:17.736 "strip_size_kb": 64, 00:14:17.736 "state": "online", 00:14:17.736 "raid_level": "raid5f", 00:14:17.736 "superblock": true, 00:14:17.736 "num_base_bdevs": 3, 00:14:17.736 "num_base_bdevs_discovered": 2, 00:14:17.736 "num_base_bdevs_operational": 2, 00:14:17.736 "base_bdevs_list": [ 00:14:17.736 { 00:14:17.736 "name": null, 00:14:17.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.736 "is_configured": false, 00:14:17.736 "data_offset": 0, 00:14:17.736 "data_size": 63488 00:14:17.736 }, 00:14:17.736 { 00:14:17.736 "name": "BaseBdev2", 00:14:17.736 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:17.736 "is_configured": true, 00:14:17.736 "data_offset": 2048, 00:14:17.736 "data_size": 63488 00:14:17.736 }, 00:14:17.736 { 00:14:17.736 "name": "BaseBdev3", 00:14:17.736 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:17.736 "is_configured": true, 00:14:17.736 "data_offset": 2048, 00:14:17.736 "data_size": 63488 00:14:17.736 } 00:14:17.736 ] 00:14:17.736 }' 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.736 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.302 "name": "raid_bdev1", 00:14:18.302 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:18.302 "strip_size_kb": 64, 00:14:18.302 "state": "online", 00:14:18.302 "raid_level": "raid5f", 00:14:18.302 "superblock": true, 00:14:18.302 "num_base_bdevs": 3, 00:14:18.302 "num_base_bdevs_discovered": 2, 00:14:18.302 "num_base_bdevs_operational": 2, 00:14:18.302 "base_bdevs_list": [ 00:14:18.302 { 00:14:18.302 "name": null, 00:14:18.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.302 "is_configured": false, 00:14:18.302 "data_offset": 0, 00:14:18.302 "data_size": 63488 00:14:18.302 }, 00:14:18.302 { 00:14:18.302 "name": "BaseBdev2", 00:14:18.302 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:18.302 "is_configured": true, 00:14:18.302 "data_offset": 2048, 00:14:18.302 "data_size": 63488 00:14:18.302 }, 00:14:18.302 { 00:14:18.302 "name": "BaseBdev3", 00:14:18.302 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:18.302 "is_configured": true, 00:14:18.302 "data_offset": 2048, 00:14:18.302 "data_size": 63488 00:14:18.302 } 00:14:18.302 ] 00:14:18.302 }' 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.302 [2024-11-20 13:27:59.931726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:18.302 [2024-11-20 13:27:59.936867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.302 13:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:18.302 [2024-11-20 13:27:59.939610] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:19.679 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.679 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.679 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.679 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.679 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.679 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.679 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:19.679 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.679 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.679 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.679 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.679 "name": "raid_bdev1", 00:14:19.679 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:19.679 "strip_size_kb": 64, 00:14:19.679 "state": "online", 00:14:19.679 "raid_level": "raid5f", 00:14:19.679 "superblock": true, 00:14:19.679 "num_base_bdevs": 3, 00:14:19.679 "num_base_bdevs_discovered": 3, 00:14:19.679 "num_base_bdevs_operational": 3, 00:14:19.679 "process": { 00:14:19.679 "type": "rebuild", 00:14:19.679 "target": "spare", 00:14:19.679 "progress": { 00:14:19.679 "blocks": 20480, 00:14:19.679 "percent": 16 00:14:19.679 } 00:14:19.679 }, 00:14:19.679 "base_bdevs_list": [ 00:14:19.679 { 00:14:19.679 "name": "spare", 00:14:19.679 "uuid": "22569fe3-f130-5f37-a8cd-e337992093ca", 00:14:19.679 "is_configured": true, 00:14:19.679 "data_offset": 2048, 00:14:19.679 "data_size": 63488 00:14:19.679 }, 00:14:19.679 { 00:14:19.679 "name": "BaseBdev2", 00:14:19.679 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:19.679 "is_configured": true, 00:14:19.679 "data_offset": 2048, 00:14:19.679 "data_size": 63488 00:14:19.679 }, 00:14:19.679 { 00:14:19.679 "name": "BaseBdev3", 00:14:19.679 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:19.680 "is_configured": true, 00:14:19.680 "data_offset": 2048, 00:14:19.680 "data_size": 63488 00:14:19.680 } 00:14:19.680 ] 00:14:19.680 }' 00:14:19.680 13:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:19.680 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=470 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.680 "name": "raid_bdev1", 00:14:19.680 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:19.680 "strip_size_kb": 64, 00:14:19.680 "state": "online", 00:14:19.680 "raid_level": "raid5f", 00:14:19.680 "superblock": true, 00:14:19.680 "num_base_bdevs": 3, 00:14:19.680 "num_base_bdevs_discovered": 3, 00:14:19.680 "num_base_bdevs_operational": 3, 00:14:19.680 "process": { 00:14:19.680 "type": "rebuild", 00:14:19.680 "target": "spare", 00:14:19.680 "progress": { 00:14:19.680 "blocks": 22528, 00:14:19.680 "percent": 17 00:14:19.680 } 00:14:19.680 }, 00:14:19.680 "base_bdevs_list": [ 00:14:19.680 { 00:14:19.680 "name": "spare", 00:14:19.680 "uuid": "22569fe3-f130-5f37-a8cd-e337992093ca", 00:14:19.680 "is_configured": true, 00:14:19.680 "data_offset": 2048, 00:14:19.680 "data_size": 63488 00:14:19.680 }, 00:14:19.680 { 00:14:19.680 "name": "BaseBdev2", 00:14:19.680 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:19.680 "is_configured": true, 00:14:19.680 "data_offset": 2048, 00:14:19.680 "data_size": 63488 00:14:19.680 }, 00:14:19.680 { 00:14:19.680 "name": "BaseBdev3", 00:14:19.680 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:19.680 "is_configured": true, 00:14:19.680 "data_offset": 2048, 00:14:19.680 "data_size": 63488 00:14:19.680 } 00:14:19.680 ] 00:14:19.680 }' 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:14:19.680 13:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:20.617 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.617 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.617 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.617 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.617 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.617 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.617 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.617 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.617 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.617 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.617 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.617 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.617 "name": "raid_bdev1", 00:14:20.617 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:20.617 "strip_size_kb": 64, 00:14:20.617 "state": "online", 00:14:20.617 "raid_level": "raid5f", 00:14:20.617 "superblock": true, 00:14:20.617 "num_base_bdevs": 3, 00:14:20.617 "num_base_bdevs_discovered": 3, 00:14:20.617 "num_base_bdevs_operational": 3, 00:14:20.617 "process": { 00:14:20.617 "type": "rebuild", 00:14:20.617 "target": "spare", 00:14:20.617 "progress": { 00:14:20.617 "blocks": 45056, 00:14:20.617 "percent": 35 00:14:20.617 } 00:14:20.617 }, 
00:14:20.617 "base_bdevs_list": [ 00:14:20.617 { 00:14:20.617 "name": "spare", 00:14:20.617 "uuid": "22569fe3-f130-5f37-a8cd-e337992093ca", 00:14:20.617 "is_configured": true, 00:14:20.617 "data_offset": 2048, 00:14:20.617 "data_size": 63488 00:14:20.617 }, 00:14:20.617 { 00:14:20.617 "name": "BaseBdev2", 00:14:20.617 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:20.617 "is_configured": true, 00:14:20.617 "data_offset": 2048, 00:14:20.617 "data_size": 63488 00:14:20.617 }, 00:14:20.617 { 00:14:20.617 "name": "BaseBdev3", 00:14:20.617 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:20.617 "is_configured": true, 00:14:20.617 "data_offset": 2048, 00:14:20.617 "data_size": 63488 00:14:20.617 } 00:14:20.617 ] 00:14:20.617 }' 00:14:20.877 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.877 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.877 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.877 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.877 13:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.812 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.812 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.812 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.812 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.812 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.812 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.812 
13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.812 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.812 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.812 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.812 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.812 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.812 "name": "raid_bdev1", 00:14:21.812 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:21.812 "strip_size_kb": 64, 00:14:21.812 "state": "online", 00:14:21.812 "raid_level": "raid5f", 00:14:21.812 "superblock": true, 00:14:21.812 "num_base_bdevs": 3, 00:14:21.812 "num_base_bdevs_discovered": 3, 00:14:21.812 "num_base_bdevs_operational": 3, 00:14:21.812 "process": { 00:14:21.812 "type": "rebuild", 00:14:21.812 "target": "spare", 00:14:21.812 "progress": { 00:14:21.812 "blocks": 69632, 00:14:21.812 "percent": 54 00:14:21.812 } 00:14:21.812 }, 00:14:21.812 "base_bdevs_list": [ 00:14:21.812 { 00:14:21.812 "name": "spare", 00:14:21.812 "uuid": "22569fe3-f130-5f37-a8cd-e337992093ca", 00:14:21.812 "is_configured": true, 00:14:21.812 "data_offset": 2048, 00:14:21.812 "data_size": 63488 00:14:21.812 }, 00:14:21.812 { 00:14:21.812 "name": "BaseBdev2", 00:14:21.812 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:21.812 "is_configured": true, 00:14:21.812 "data_offset": 2048, 00:14:21.812 "data_size": 63488 00:14:21.812 }, 00:14:21.812 { 00:14:21.812 "name": "BaseBdev3", 00:14:21.812 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:21.812 "is_configured": true, 00:14:21.812 "data_offset": 2048, 00:14:21.812 "data_size": 63488 00:14:21.812 } 00:14:21.812 ] 00:14:21.812 }' 00:14:21.812 13:28:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.070 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.070 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.070 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.070 13:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.004 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.004 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.004 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.004 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.004 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.004 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.004 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.004 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.004 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.004 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.004 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.004 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.004 "name": "raid_bdev1", 00:14:23.004 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:23.004 
"strip_size_kb": 64, 00:14:23.004 "state": "online", 00:14:23.004 "raid_level": "raid5f", 00:14:23.004 "superblock": true, 00:14:23.004 "num_base_bdevs": 3, 00:14:23.004 "num_base_bdevs_discovered": 3, 00:14:23.004 "num_base_bdevs_operational": 3, 00:14:23.004 "process": { 00:14:23.004 "type": "rebuild", 00:14:23.004 "target": "spare", 00:14:23.004 "progress": { 00:14:23.004 "blocks": 92160, 00:14:23.004 "percent": 72 00:14:23.004 } 00:14:23.004 }, 00:14:23.004 "base_bdevs_list": [ 00:14:23.004 { 00:14:23.004 "name": "spare", 00:14:23.004 "uuid": "22569fe3-f130-5f37-a8cd-e337992093ca", 00:14:23.004 "is_configured": true, 00:14:23.004 "data_offset": 2048, 00:14:23.004 "data_size": 63488 00:14:23.004 }, 00:14:23.004 { 00:14:23.004 "name": "BaseBdev2", 00:14:23.004 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:23.004 "is_configured": true, 00:14:23.004 "data_offset": 2048, 00:14:23.004 "data_size": 63488 00:14:23.004 }, 00:14:23.004 { 00:14:23.004 "name": "BaseBdev3", 00:14:23.004 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:23.004 "is_configured": true, 00:14:23.004 "data_offset": 2048, 00:14:23.004 "data_size": 63488 00:14:23.004 } 00:14:23.004 ] 00:14:23.004 }' 00:14:23.004 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.004 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.004 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.262 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.262 13:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:24.198 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.198 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:14:24.198 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.198 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.198 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.198 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.198 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.198 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.198 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.198 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.198 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.198 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.198 "name": "raid_bdev1", 00:14:24.198 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:24.198 "strip_size_kb": 64, 00:14:24.198 "state": "online", 00:14:24.198 "raid_level": "raid5f", 00:14:24.198 "superblock": true, 00:14:24.198 "num_base_bdevs": 3, 00:14:24.198 "num_base_bdevs_discovered": 3, 00:14:24.198 "num_base_bdevs_operational": 3, 00:14:24.198 "process": { 00:14:24.198 "type": "rebuild", 00:14:24.198 "target": "spare", 00:14:24.198 "progress": { 00:14:24.198 "blocks": 114688, 00:14:24.198 "percent": 90 00:14:24.198 } 00:14:24.198 }, 00:14:24.198 "base_bdevs_list": [ 00:14:24.198 { 00:14:24.199 "name": "spare", 00:14:24.199 "uuid": "22569fe3-f130-5f37-a8cd-e337992093ca", 00:14:24.199 "is_configured": true, 00:14:24.199 "data_offset": 2048, 00:14:24.199 "data_size": 63488 00:14:24.199 }, 00:14:24.199 { 00:14:24.199 "name": "BaseBdev2", 00:14:24.199 "uuid": 
"aad85313-024f-5422-a1c6-00f4323d2033", 00:14:24.199 "is_configured": true, 00:14:24.199 "data_offset": 2048, 00:14:24.199 "data_size": 63488 00:14:24.199 }, 00:14:24.199 { 00:14:24.199 "name": "BaseBdev3", 00:14:24.199 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:24.199 "is_configured": true, 00:14:24.199 "data_offset": 2048, 00:14:24.199 "data_size": 63488 00:14:24.199 } 00:14:24.199 ] 00:14:24.199 }' 00:14:24.199 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.199 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.199 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.199 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.199 13:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:24.765 [2024-11-20 13:28:06.212573] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:24.765 [2024-11-20 13:28:06.212721] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:24.765 [2024-11-20 13:28:06.212981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.331 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.331 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.331 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.331 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.331 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.331 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.331 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.331 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.331 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.331 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.331 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.331 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.331 "name": "raid_bdev1", 00:14:25.331 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:25.331 "strip_size_kb": 64, 00:14:25.331 "state": "online", 00:14:25.331 "raid_level": "raid5f", 00:14:25.331 "superblock": true, 00:14:25.331 "num_base_bdevs": 3, 00:14:25.331 "num_base_bdevs_discovered": 3, 00:14:25.331 "num_base_bdevs_operational": 3, 00:14:25.331 "base_bdevs_list": [ 00:14:25.331 { 00:14:25.331 "name": "spare", 00:14:25.331 "uuid": "22569fe3-f130-5f37-a8cd-e337992093ca", 00:14:25.331 "is_configured": true, 00:14:25.331 "data_offset": 2048, 00:14:25.331 "data_size": 63488 00:14:25.331 }, 00:14:25.331 { 00:14:25.331 "name": "BaseBdev2", 00:14:25.331 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:25.331 "is_configured": true, 00:14:25.331 "data_offset": 2048, 00:14:25.331 "data_size": 63488 00:14:25.331 }, 00:14:25.331 { 00:14:25.331 "name": "BaseBdev3", 00:14:25.332 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:25.332 "is_configured": true, 00:14:25.332 "data_offset": 2048, 00:14:25.332 "data_size": 63488 00:14:25.332 } 00:14:25.332 ] 00:14:25.332 }' 00:14:25.332 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.332 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:25.332 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.332 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:25.332 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:25.332 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.332 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.332 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:25.332 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.332 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.332 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.332 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.332 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.332 13:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.589 "name": "raid_bdev1", 00:14:25.589 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:25.589 "strip_size_kb": 64, 00:14:25.589 "state": "online", 00:14:25.589 "raid_level": "raid5f", 00:14:25.589 "superblock": true, 00:14:25.589 "num_base_bdevs": 3, 00:14:25.589 "num_base_bdevs_discovered": 3, 00:14:25.589 "num_base_bdevs_operational": 3, 00:14:25.589 "base_bdevs_list": [ 
00:14:25.589 { 00:14:25.589 "name": "spare", 00:14:25.589 "uuid": "22569fe3-f130-5f37-a8cd-e337992093ca", 00:14:25.589 "is_configured": true, 00:14:25.589 "data_offset": 2048, 00:14:25.589 "data_size": 63488 00:14:25.589 }, 00:14:25.589 { 00:14:25.589 "name": "BaseBdev2", 00:14:25.589 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:25.589 "is_configured": true, 00:14:25.589 "data_offset": 2048, 00:14:25.589 "data_size": 63488 00:14:25.589 }, 00:14:25.589 { 00:14:25.589 "name": "BaseBdev3", 00:14:25.589 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:25.589 "is_configured": true, 00:14:25.589 "data_offset": 2048, 00:14:25.589 "data_size": 63488 00:14:25.589 } 00:14:25.589 ] 00:14:25.589 }' 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.589 13:28:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.589 "name": "raid_bdev1", 00:14:25.589 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:25.589 "strip_size_kb": 64, 00:14:25.589 "state": "online", 00:14:25.589 "raid_level": "raid5f", 00:14:25.589 "superblock": true, 00:14:25.589 "num_base_bdevs": 3, 00:14:25.589 "num_base_bdevs_discovered": 3, 00:14:25.589 "num_base_bdevs_operational": 3, 00:14:25.589 "base_bdevs_list": [ 00:14:25.589 { 00:14:25.589 "name": "spare", 00:14:25.589 "uuid": "22569fe3-f130-5f37-a8cd-e337992093ca", 00:14:25.589 "is_configured": true, 00:14:25.589 "data_offset": 2048, 00:14:25.589 "data_size": 63488 00:14:25.589 }, 00:14:25.589 { 00:14:25.589 "name": "BaseBdev2", 00:14:25.589 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:25.589 "is_configured": true, 00:14:25.589 "data_offset": 2048, 00:14:25.589 "data_size": 63488 00:14:25.589 }, 00:14:25.589 { 00:14:25.589 "name": "BaseBdev3", 00:14:25.589 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:25.589 "is_configured": true, 00:14:25.589 "data_offset": 2048, 00:14:25.589 
"data_size": 63488 00:14:25.589 } 00:14:25.589 ] 00:14:25.589 }' 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.589 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.155 [2024-11-20 13:28:07.608900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:26.155 [2024-11-20 13:28:07.609146] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:26.155 [2024-11-20 13:28:07.609331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:26.155 [2024-11-20 13:28:07.609537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:26.155 [2024-11-20 13:28:07.609607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:26.155 13:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:26.413 /dev/nbd0 00:14:26.413 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:26.413 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:26.413 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:26.414 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:26.414 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:26.414 13:28:08 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:26.414 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:26.414 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:26.414 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:26.414 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:26.414 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.414 1+0 records in 00:14:26.414 1+0 records out 00:14:26.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476748 s, 8.6 MB/s 00:14:26.414 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.414 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:26.414 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.414 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:26.414 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:26.414 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.414 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:26.414 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:26.672 /dev/nbd1 00:14:26.672 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:26.672 13:28:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:26.672 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:26.672 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:26.672 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:26.672 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:26.672 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:26.672 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:26.672 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:26.672 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:26.672 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.672 1+0 records in 00:14:26.672 1+0 records out 00:14:26.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453261 s, 9.0 MB/s 00:14:26.672 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.672 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:26.672 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.930 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:26.930 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:26.930 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.930 13:28:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:26.930 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:26.930 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:26.931 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.931 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:26.931 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:26.931 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:26.931 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.931 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:27.189 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:27.189 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:27.189 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:27.189 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:27.189 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:27.189 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:27.189 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:27.189 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:27.189 13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:27.189 
13:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.448 [2024-11-20 13:28:09.037461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:27.448 
[2024-11-20 13:28:09.037562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.448 [2024-11-20 13:28:09.037595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:27.448 [2024-11-20 13:28:09.037610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.448 [2024-11-20 13:28:09.040420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.448 [2024-11-20 13:28:09.040488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:27.448 [2024-11-20 13:28:09.040634] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:27.448 [2024-11-20 13:28:09.040720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:27.448 [2024-11-20 13:28:09.040941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.448 [2024-11-20 13:28:09.041142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:27.448 spare 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.448 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.707 [2024-11-20 13:28:09.141111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:14:27.707 [2024-11-20 13:28:09.141198] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:27.707 [2024-11-20 13:28:09.141640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043d50 00:14:27.707 [2024-11-20 13:28:09.142259] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:14:27.707 [2024-11-20 13:28:09.142302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:14:27.707 [2024-11-20 13:28:09.142593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.707 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.707 "name": "raid_bdev1", 00:14:27.707 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:27.707 "strip_size_kb": 64, 00:14:27.707 "state": "online", 00:14:27.707 "raid_level": "raid5f", 00:14:27.707 "superblock": true, 00:14:27.707 "num_base_bdevs": 3, 00:14:27.707 "num_base_bdevs_discovered": 3, 00:14:27.707 "num_base_bdevs_operational": 3, 00:14:27.707 "base_bdevs_list": [ 00:14:27.707 { 00:14:27.707 "name": "spare", 00:14:27.707 "uuid": "22569fe3-f130-5f37-a8cd-e337992093ca", 00:14:27.707 "is_configured": true, 00:14:27.708 "data_offset": 2048, 00:14:27.708 "data_size": 63488 00:14:27.708 }, 00:14:27.708 { 00:14:27.708 "name": "BaseBdev2", 00:14:27.708 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:27.708 "is_configured": true, 00:14:27.708 "data_offset": 2048, 00:14:27.708 "data_size": 63488 00:14:27.708 }, 00:14:27.708 { 00:14:27.708 "name": "BaseBdev3", 00:14:27.708 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:27.708 "is_configured": true, 00:14:27.708 "data_offset": 2048, 00:14:27.708 "data_size": 63488 00:14:27.708 } 00:14:27.708 ] 00:14:27.708 }' 00:14:27.708 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.708 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.965 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.965 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:14:27.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.966 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.223 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.223 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.223 "name": "raid_bdev1", 00:14:28.223 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:28.223 "strip_size_kb": 64, 00:14:28.223 "state": "online", 00:14:28.223 "raid_level": "raid5f", 00:14:28.223 "superblock": true, 00:14:28.223 "num_base_bdevs": 3, 00:14:28.223 "num_base_bdevs_discovered": 3, 00:14:28.223 "num_base_bdevs_operational": 3, 00:14:28.223 "base_bdevs_list": [ 00:14:28.223 { 00:14:28.223 "name": "spare", 00:14:28.223 "uuid": "22569fe3-f130-5f37-a8cd-e337992093ca", 00:14:28.223 "is_configured": true, 00:14:28.223 "data_offset": 2048, 00:14:28.223 "data_size": 63488 00:14:28.223 }, 00:14:28.224 { 00:14:28.224 "name": "BaseBdev2", 00:14:28.224 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:28.224 "is_configured": true, 00:14:28.224 "data_offset": 2048, 00:14:28.224 "data_size": 63488 00:14:28.224 }, 00:14:28.224 { 00:14:28.224 "name": "BaseBdev3", 00:14:28.224 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:28.224 "is_configured": true, 00:14:28.224 "data_offset": 2048, 00:14:28.224 "data_size": 63488 00:14:28.224 } 00:14:28.224 ] 00:14:28.224 }' 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.224 [2024-11-20 13:28:09.838220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.224 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.481 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.481 "name": "raid_bdev1", 00:14:28.481 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:28.481 "strip_size_kb": 64, 00:14:28.481 "state": "online", 00:14:28.481 "raid_level": "raid5f", 00:14:28.481 "superblock": true, 00:14:28.481 "num_base_bdevs": 3, 00:14:28.481 "num_base_bdevs_discovered": 2, 00:14:28.481 "num_base_bdevs_operational": 2, 00:14:28.481 "base_bdevs_list": [ 00:14:28.481 { 00:14:28.481 "name": null, 00:14:28.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.481 "is_configured": false, 00:14:28.482 "data_offset": 0, 00:14:28.482 "data_size": 63488 00:14:28.482 }, 00:14:28.482 { 00:14:28.482 "name": "BaseBdev2", 
00:14:28.482 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:28.482 "is_configured": true, 00:14:28.482 "data_offset": 2048, 00:14:28.482 "data_size": 63488 00:14:28.482 }, 00:14:28.482 { 00:14:28.482 "name": "BaseBdev3", 00:14:28.482 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:28.482 "is_configured": true, 00:14:28.482 "data_offset": 2048, 00:14:28.482 "data_size": 63488 00:14:28.482 } 00:14:28.482 ] 00:14:28.482 }' 00:14:28.482 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.482 13:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.739 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:28.739 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.739 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.739 [2024-11-20 13:28:10.329873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.739 [2024-11-20 13:28:10.330166] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:28.739 [2024-11-20 13:28:10.330188] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:28.740 [2024-11-20 13:28:10.330258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.740 [2024-11-20 13:28:10.335283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043e20 00:14:28.740 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.740 13:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:28.740 [2024-11-20 13:28:10.338017] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.674 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.674 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.674 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.674 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.674 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.931 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.931 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.931 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.931 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.931 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.931 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.931 "name": "raid_bdev1", 00:14:29.931 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:29.931 "strip_size_kb": 64, 00:14:29.931 "state": "online", 00:14:29.931 
"raid_level": "raid5f", 00:14:29.931 "superblock": true, 00:14:29.931 "num_base_bdevs": 3, 00:14:29.931 "num_base_bdevs_discovered": 3, 00:14:29.931 "num_base_bdevs_operational": 3, 00:14:29.931 "process": { 00:14:29.931 "type": "rebuild", 00:14:29.931 "target": "spare", 00:14:29.931 "progress": { 00:14:29.931 "blocks": 20480, 00:14:29.931 "percent": 16 00:14:29.931 } 00:14:29.931 }, 00:14:29.931 "base_bdevs_list": [ 00:14:29.931 { 00:14:29.931 "name": "spare", 00:14:29.931 "uuid": "22569fe3-f130-5f37-a8cd-e337992093ca", 00:14:29.931 "is_configured": true, 00:14:29.931 "data_offset": 2048, 00:14:29.931 "data_size": 63488 00:14:29.931 }, 00:14:29.931 { 00:14:29.931 "name": "BaseBdev2", 00:14:29.931 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:29.931 "is_configured": true, 00:14:29.931 "data_offset": 2048, 00:14:29.931 "data_size": 63488 00:14:29.932 }, 00:14:29.932 { 00:14:29.932 "name": "BaseBdev3", 00:14:29.932 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:29.932 "is_configured": true, 00:14:29.932 "data_offset": 2048, 00:14:29.932 "data_size": 63488 00:14:29.932 } 00:14:29.932 ] 00:14:29.932 }' 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.932 [2024-11-20 13:28:11.491217] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.932 [2024-11-20 13:28:11.550905] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:29.932 [2024-11-20 13:28:11.551035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.932 [2024-11-20 13:28:11.551069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.932 [2024-11-20 13:28:11.551082] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.932 13:28:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.932 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.189 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.189 "name": "raid_bdev1", 00:14:30.189 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:30.189 "strip_size_kb": 64, 00:14:30.189 "state": "online", 00:14:30.189 "raid_level": "raid5f", 00:14:30.189 "superblock": true, 00:14:30.189 "num_base_bdevs": 3, 00:14:30.189 "num_base_bdevs_discovered": 2, 00:14:30.189 "num_base_bdevs_operational": 2, 00:14:30.189 "base_bdevs_list": [ 00:14:30.189 { 00:14:30.189 "name": null, 00:14:30.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.189 "is_configured": false, 00:14:30.189 "data_offset": 0, 00:14:30.189 "data_size": 63488 00:14:30.189 }, 00:14:30.189 { 00:14:30.189 "name": "BaseBdev2", 00:14:30.189 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:30.189 "is_configured": true, 00:14:30.189 "data_offset": 2048, 00:14:30.189 "data_size": 63488 00:14:30.189 }, 00:14:30.189 { 00:14:30.189 "name": "BaseBdev3", 00:14:30.189 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:30.189 "is_configured": true, 00:14:30.189 "data_offset": 2048, 00:14:30.189 "data_size": 63488 00:14:30.189 } 00:14:30.189 ] 00:14:30.189 }' 00:14:30.189 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.189 13:28:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.448 13:28:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:30.448 13:28:12 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.448 13:28:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.448 [2024-11-20 13:28:12.045239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:30.448 [2024-11-20 13:28:12.045340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.448 [2024-11-20 13:28:12.045374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:30.448 [2024-11-20 13:28:12.045388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.448 [2024-11-20 13:28:12.045960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.448 [2024-11-20 13:28:12.046025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:30.448 [2024-11-20 13:28:12.046166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:30.448 [2024-11-20 13:28:12.046191] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:30.448 [2024-11-20 13:28:12.046211] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:30.448 [2024-11-20 13:28:12.046252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:30.448 [2024-11-20 13:28:12.051381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043ef0 00:14:30.448 spare 00:14:30.448 13:28:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.448 13:28:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:30.448 [2024-11-20 13:28:12.054491] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.828 "name": "raid_bdev1", 00:14:31.828 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:31.828 "strip_size_kb": 64, 00:14:31.828 "state": 
"online", 00:14:31.828 "raid_level": "raid5f", 00:14:31.828 "superblock": true, 00:14:31.828 "num_base_bdevs": 3, 00:14:31.828 "num_base_bdevs_discovered": 3, 00:14:31.828 "num_base_bdevs_operational": 3, 00:14:31.828 "process": { 00:14:31.828 "type": "rebuild", 00:14:31.828 "target": "spare", 00:14:31.828 "progress": { 00:14:31.828 "blocks": 18432, 00:14:31.828 "percent": 14 00:14:31.828 } 00:14:31.828 }, 00:14:31.828 "base_bdevs_list": [ 00:14:31.828 { 00:14:31.828 "name": "spare", 00:14:31.828 "uuid": "22569fe3-f130-5f37-a8cd-e337992093ca", 00:14:31.828 "is_configured": true, 00:14:31.828 "data_offset": 2048, 00:14:31.828 "data_size": 63488 00:14:31.828 }, 00:14:31.828 { 00:14:31.828 "name": "BaseBdev2", 00:14:31.828 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:31.828 "is_configured": true, 00:14:31.828 "data_offset": 2048, 00:14:31.828 "data_size": 63488 00:14:31.828 }, 00:14:31.828 { 00:14:31.828 "name": "BaseBdev3", 00:14:31.828 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:31.828 "is_configured": true, 00:14:31.828 "data_offset": 2048, 00:14:31.828 "data_size": 63488 00:14:31.828 } 00:14:31.828 ] 00:14:31.828 }' 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.828 [2024-11-20 13:28:13.206013] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.828 [2024-11-20 13:28:13.269542] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:31.828 [2024-11-20 13:28:13.269716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.828 [2024-11-20 13:28:13.269743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.828 [2024-11-20 13:28:13.269760] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.828 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.828 "name": "raid_bdev1", 00:14:31.828 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:31.828 "strip_size_kb": 64, 00:14:31.828 "state": "online", 00:14:31.828 "raid_level": "raid5f", 00:14:31.828 "superblock": true, 00:14:31.828 "num_base_bdevs": 3, 00:14:31.828 "num_base_bdevs_discovered": 2, 00:14:31.828 "num_base_bdevs_operational": 2, 00:14:31.828 "base_bdevs_list": [ 00:14:31.828 { 00:14:31.828 "name": null, 00:14:31.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.829 "is_configured": false, 00:14:31.829 "data_offset": 0, 00:14:31.829 "data_size": 63488 00:14:31.829 }, 00:14:31.829 { 00:14:31.829 "name": "BaseBdev2", 00:14:31.829 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:31.829 "is_configured": true, 00:14:31.829 "data_offset": 2048, 00:14:31.829 "data_size": 63488 00:14:31.829 }, 00:14:31.829 { 00:14:31.829 "name": "BaseBdev3", 00:14:31.829 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:31.829 "is_configured": true, 00:14:31.829 "data_offset": 2048, 00:14:31.829 "data_size": 63488 00:14:31.829 } 00:14:31.829 ] 00:14:31.829 }' 00:14:31.829 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.829 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.395 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.395 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:14:32.395 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.395 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.395 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.395 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.395 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.395 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.395 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.395 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.395 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.395 "name": "raid_bdev1", 00:14:32.395 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:32.395 "strip_size_kb": 64, 00:14:32.395 "state": "online", 00:14:32.395 "raid_level": "raid5f", 00:14:32.395 "superblock": true, 00:14:32.395 "num_base_bdevs": 3, 00:14:32.395 "num_base_bdevs_discovered": 2, 00:14:32.395 "num_base_bdevs_operational": 2, 00:14:32.395 "base_bdevs_list": [ 00:14:32.395 { 00:14:32.395 "name": null, 00:14:32.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.395 "is_configured": false, 00:14:32.395 "data_offset": 0, 00:14:32.395 "data_size": 63488 00:14:32.395 }, 00:14:32.395 { 00:14:32.396 "name": "BaseBdev2", 00:14:32.396 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:32.396 "is_configured": true, 00:14:32.396 "data_offset": 2048, 00:14:32.396 "data_size": 63488 00:14:32.396 }, 00:14:32.396 { 00:14:32.396 "name": "BaseBdev3", 00:14:32.396 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:32.396 "is_configured": true, 
00:14:32.396 "data_offset": 2048, 00:14:32.396 "data_size": 63488 00:14:32.396 } 00:14:32.396 ] 00:14:32.396 }' 00:14:32.396 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.396 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.396 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.396 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.396 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:32.396 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.396 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.396 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.396 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:32.396 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.396 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.396 [2024-11-20 13:28:13.955682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:32.396 [2024-11-20 13:28:13.955794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.396 [2024-11-20 13:28:13.955831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:32.396 [2024-11-20 13:28:13.955852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.396 [2024-11-20 13:28:13.956365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.396 [2024-11-20 
13:28:13.956409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:32.396 [2024-11-20 13:28:13.956508] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:32.396 [2024-11-20 13:28:13.956536] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:32.396 [2024-11-20 13:28:13.956548] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:32.396 [2024-11-20 13:28:13.956564] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:32.396 BaseBdev1 00:14:32.396 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.396 13:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:33.331 13:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:33.331 13:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.331 13:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.331 13:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.331 13:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.331 13:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.331 13:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.331 13:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.331 13:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.331 13:28:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.331 13:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.331 13:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.331 13:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.331 13:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.331 13:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.589 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.589 "name": "raid_bdev1", 00:14:33.589 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:33.589 "strip_size_kb": 64, 00:14:33.589 "state": "online", 00:14:33.589 "raid_level": "raid5f", 00:14:33.589 "superblock": true, 00:14:33.589 "num_base_bdevs": 3, 00:14:33.589 "num_base_bdevs_discovered": 2, 00:14:33.589 "num_base_bdevs_operational": 2, 00:14:33.589 "base_bdevs_list": [ 00:14:33.589 { 00:14:33.589 "name": null, 00:14:33.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.589 "is_configured": false, 00:14:33.589 "data_offset": 0, 00:14:33.589 "data_size": 63488 00:14:33.589 }, 00:14:33.589 { 00:14:33.589 "name": "BaseBdev2", 00:14:33.589 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:33.589 "is_configured": true, 00:14:33.589 "data_offset": 2048, 00:14:33.589 "data_size": 63488 00:14:33.589 }, 00:14:33.589 { 00:14:33.589 "name": "BaseBdev3", 00:14:33.589 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:33.589 "is_configured": true, 00:14:33.589 "data_offset": 2048, 00:14:33.589 "data_size": 63488 00:14:33.589 } 00:14:33.589 ] 00:14:33.589 }' 00:14:33.589 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.589 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:33.848 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.848 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.848 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.848 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.848 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.848 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.848 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.848 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.848 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.848 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.848 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.848 "name": "raid_bdev1", 00:14:33.848 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:33.848 "strip_size_kb": 64, 00:14:33.848 "state": "online", 00:14:33.848 "raid_level": "raid5f", 00:14:33.848 "superblock": true, 00:14:33.848 "num_base_bdevs": 3, 00:14:33.848 "num_base_bdevs_discovered": 2, 00:14:33.848 "num_base_bdevs_operational": 2, 00:14:33.848 "base_bdevs_list": [ 00:14:33.848 { 00:14:33.848 "name": null, 00:14:33.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.848 "is_configured": false, 00:14:33.848 "data_offset": 0, 00:14:33.848 "data_size": 63488 00:14:33.848 }, 00:14:33.848 { 00:14:33.848 "name": "BaseBdev2", 00:14:33.848 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 
00:14:33.848 "is_configured": true, 00:14:33.848 "data_offset": 2048, 00:14:33.848 "data_size": 63488 00:14:33.848 }, 00:14:33.848 { 00:14:33.848 "name": "BaseBdev3", 00:14:33.848 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:33.848 "is_configured": true, 00:14:33.848 "data_offset": 2048, 00:14:33.848 "data_size": 63488 00:14:33.848 } 00:14:33.848 ] 00:14:33.848 }' 00:14:33.848 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.106 13:28:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.106 [2024-11-20 13:28:15.579746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.106 [2024-11-20 13:28:15.579983] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:34.106 [2024-11-20 13:28:15.580020] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:34.106 request: 00:14:34.106 { 00:14:34.106 "base_bdev": "BaseBdev1", 00:14:34.106 "raid_bdev": "raid_bdev1", 00:14:34.106 "method": "bdev_raid_add_base_bdev", 00:14:34.106 "req_id": 1 00:14:34.106 } 00:14:34.106 Got JSON-RPC error response 00:14:34.106 response: 00:14:34.106 { 00:14:34.106 "code": -22, 00:14:34.106 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:34.106 } 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:34.106 13:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.039 "name": "raid_bdev1", 00:14:35.039 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:35.039 "strip_size_kb": 64, 00:14:35.039 "state": "online", 00:14:35.039 "raid_level": "raid5f", 00:14:35.039 "superblock": true, 00:14:35.039 "num_base_bdevs": 3, 00:14:35.039 "num_base_bdevs_discovered": 2, 00:14:35.039 "num_base_bdevs_operational": 2, 00:14:35.039 "base_bdevs_list": [ 00:14:35.039 { 00:14:35.039 "name": null, 00:14:35.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.039 "is_configured": false, 00:14:35.039 "data_offset": 0, 00:14:35.039 "data_size": 63488 00:14:35.039 }, 00:14:35.039 { 00:14:35.039 
"name": "BaseBdev2", 00:14:35.039 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:35.039 "is_configured": true, 00:14:35.039 "data_offset": 2048, 00:14:35.039 "data_size": 63488 00:14:35.039 }, 00:14:35.039 { 00:14:35.039 "name": "BaseBdev3", 00:14:35.039 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:35.039 "is_configured": true, 00:14:35.039 "data_offset": 2048, 00:14:35.039 "data_size": 63488 00:14:35.039 } 00:14:35.039 ] 00:14:35.039 }' 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.039 13:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.347 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:35.347 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.347 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:35.347 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:35.347 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.347 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.347 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.347 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.347 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.607 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.607 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.607 "name": "raid_bdev1", 00:14:35.607 "uuid": "ef5e4e94-912e-42fe-9ce6-16d2af2d0955", 00:14:35.607 
"strip_size_kb": 64, 00:14:35.607 "state": "online", 00:14:35.607 "raid_level": "raid5f", 00:14:35.607 "superblock": true, 00:14:35.607 "num_base_bdevs": 3, 00:14:35.607 "num_base_bdevs_discovered": 2, 00:14:35.607 "num_base_bdevs_operational": 2, 00:14:35.607 "base_bdevs_list": [ 00:14:35.607 { 00:14:35.607 "name": null, 00:14:35.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.607 "is_configured": false, 00:14:35.607 "data_offset": 0, 00:14:35.607 "data_size": 63488 00:14:35.607 }, 00:14:35.607 { 00:14:35.607 "name": "BaseBdev2", 00:14:35.607 "uuid": "aad85313-024f-5422-a1c6-00f4323d2033", 00:14:35.607 "is_configured": true, 00:14:35.607 "data_offset": 2048, 00:14:35.607 "data_size": 63488 00:14:35.607 }, 00:14:35.607 { 00:14:35.607 "name": "BaseBdev3", 00:14:35.607 "uuid": "99c41507-459b-50b8-80ae-906f7f39dc8c", 00:14:35.607 "is_configured": true, 00:14:35.607 "data_offset": 2048, 00:14:35.607 "data_size": 63488 00:14:35.607 } 00:14:35.607 ] 00:14:35.607 }' 00:14:35.607 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.607 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:35.607 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.607 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:35.607 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92266 00:14:35.607 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 92266 ']' 00:14:35.607 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 92266 00:14:35.607 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:35.607 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.607 13:28:17 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92266 00:14:35.607 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:35.607 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:35.607 killing process with pid 92266 00:14:35.607 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92266' 00:14:35.607 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 92266 00:14:35.607 Received shutdown signal, test time was about 60.000000 seconds 00:14:35.607 00:14:35.607 Latency(us) 00:14:35.607 [2024-11-20T13:28:17.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.607 [2024-11-20T13:28:17.275Z] =================================================================================================================== 00:14:35.607 [2024-11-20T13:28:17.275Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:35.607 [2024-11-20 13:28:17.170168] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:35.607 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 92266 00:14:35.607 [2024-11-20 13:28:17.170372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.607 [2024-11-20 13:28:17.170497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.607 [2024-11-20 13:28:17.170520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:14:35.607 [2024-11-20 13:28:17.222614] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:35.867 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:35.867 00:14:35.867 real 0m22.620s 00:14:35.867 user 0m29.860s 
00:14:35.867 sys 0m2.766s 00:14:35.867 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.867 13:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.867 ************************************ 00:14:35.867 END TEST raid5f_rebuild_test_sb 00:14:35.867 ************************************ 00:14:35.867 13:28:17 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:35.867 13:28:17 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:35.867 13:28:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:35.867 13:28:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.867 13:28:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.867 ************************************ 00:14:35.867 START TEST raid5f_state_function_test 00:14:35.867 ************************************ 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93007 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:35.867 Process raid pid: 93007 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93007' 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93007 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 93007 ']' 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.867 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.125 [2024-11-20 13:28:17.610200] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:14:36.125 [2024-11-20 13:28:17.610383] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.125 [2024-11-20 13:28:17.755344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.125 [2024-11-20 13:28:17.786630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.384 [2024-11-20 13:28:17.835315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.384 [2024-11-20 13:28:17.835387] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.384 [2024-11-20 13:28:17.892250] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.384 [2024-11-20 13:28:17.892336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.384 [2024-11-20 13:28:17.892350] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.384 [2024-11-20 13:28:17.892364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.384 [2024-11-20 13:28:17.892374] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:36.384 [2024-11-20 13:28:17.892389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:36.384 [2024-11-20 13:28:17.892399] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:36.384 [2024-11-20 13:28:17.892413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.384 13:28:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.384 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.384 "name": "Existed_Raid", 00:14:36.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.384 "strip_size_kb": 64, 00:14:36.384 "state": "configuring", 00:14:36.384 "raid_level": "raid5f", 00:14:36.384 "superblock": false, 00:14:36.384 "num_base_bdevs": 4, 00:14:36.384 "num_base_bdevs_discovered": 0, 00:14:36.384 "num_base_bdevs_operational": 4, 00:14:36.384 "base_bdevs_list": [ 00:14:36.384 { 00:14:36.384 "name": "BaseBdev1", 00:14:36.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.384 "is_configured": false, 00:14:36.384 "data_offset": 0, 00:14:36.384 "data_size": 0 00:14:36.384 }, 00:14:36.384 { 00:14:36.384 "name": "BaseBdev2", 00:14:36.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.384 "is_configured": false, 00:14:36.384 "data_offset": 0, 00:14:36.384 "data_size": 0 00:14:36.384 }, 00:14:36.385 { 00:14:36.385 "name": "BaseBdev3", 00:14:36.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.385 "is_configured": false, 00:14:36.385 "data_offset": 0, 00:14:36.385 "data_size": 0 00:14:36.385 }, 00:14:36.385 { 00:14:36.385 "name": "BaseBdev4", 00:14:36.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.385 "is_configured": false, 00:14:36.385 "data_offset": 0, 00:14:36.385 "data_size": 0 00:14:36.385 } 00:14:36.385 ] 00:14:36.385 }' 00:14:36.385 13:28:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.385 13:28:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.952 [2024-11-20 13:28:18.335414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:36.952 [2024-11-20 13:28:18.335480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.952 [2024-11-20 13:28:18.343464] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.952 [2024-11-20 13:28:18.343557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.952 [2024-11-20 13:28:18.343572] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.952 [2024-11-20 13:28:18.343587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.952 [2024-11-20 13:28:18.343597] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:36.952 [2024-11-20 13:28:18.343612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:36.952 [2024-11-20 13:28:18.343621] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:36.952 [2024-11-20 13:28:18.343639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.952 [2024-11-20 13:28:18.363210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.952 BaseBdev1 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.952 
13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.952 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.952 [ 00:14:36.952 { 00:14:36.952 "name": "BaseBdev1", 00:14:36.952 "aliases": [ 00:14:36.952 "4b85f866-b53f-482a-8b98-329e5d65b635" 00:14:36.952 ], 00:14:36.952 "product_name": "Malloc disk", 00:14:36.952 "block_size": 512, 00:14:36.952 "num_blocks": 65536, 00:14:36.952 "uuid": "4b85f866-b53f-482a-8b98-329e5d65b635", 00:14:36.952 "assigned_rate_limits": { 00:14:36.952 "rw_ios_per_sec": 0, 00:14:36.953 "rw_mbytes_per_sec": 0, 00:14:36.953 "r_mbytes_per_sec": 0, 00:14:36.953 "w_mbytes_per_sec": 0 00:14:36.953 }, 00:14:36.953 "claimed": true, 00:14:36.953 "claim_type": "exclusive_write", 00:14:36.953 "zoned": false, 00:14:36.953 "supported_io_types": { 00:14:36.953 "read": true, 00:14:36.953 "write": true, 00:14:36.953 "unmap": true, 00:14:36.953 "flush": true, 00:14:36.953 "reset": true, 00:14:36.953 "nvme_admin": false, 00:14:36.953 "nvme_io": false, 00:14:36.953 "nvme_io_md": false, 00:14:36.953 "write_zeroes": true, 00:14:36.953 "zcopy": true, 00:14:36.953 "get_zone_info": false, 00:14:36.953 "zone_management": false, 00:14:36.953 "zone_append": false, 00:14:36.953 "compare": false, 00:14:36.953 "compare_and_write": false, 00:14:36.953 "abort": true, 00:14:36.953 "seek_hole": false, 00:14:36.953 "seek_data": false, 00:14:36.953 "copy": true, 00:14:36.953 "nvme_iov_md": false 00:14:36.953 }, 00:14:36.953 "memory_domains": [ 00:14:36.953 { 00:14:36.953 "dma_device_id": "system", 00:14:36.953 "dma_device_type": 1 00:14:36.953 }, 00:14:36.953 { 00:14:36.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.953 "dma_device_type": 2 00:14:36.953 } 00:14:36.953 ], 00:14:36.953 "driver_specific": {} 00:14:36.953 } 
00:14:36.953 ] 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.953 "name": "Existed_Raid", 00:14:36.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.953 "strip_size_kb": 64, 00:14:36.953 "state": "configuring", 00:14:36.953 "raid_level": "raid5f", 00:14:36.953 "superblock": false, 00:14:36.953 "num_base_bdevs": 4, 00:14:36.953 "num_base_bdevs_discovered": 1, 00:14:36.953 "num_base_bdevs_operational": 4, 00:14:36.953 "base_bdevs_list": [ 00:14:36.953 { 00:14:36.953 "name": "BaseBdev1", 00:14:36.953 "uuid": "4b85f866-b53f-482a-8b98-329e5d65b635", 00:14:36.953 "is_configured": true, 00:14:36.953 "data_offset": 0, 00:14:36.953 "data_size": 65536 00:14:36.953 }, 00:14:36.953 { 00:14:36.953 "name": "BaseBdev2", 00:14:36.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.953 "is_configured": false, 00:14:36.953 "data_offset": 0, 00:14:36.953 "data_size": 0 00:14:36.953 }, 00:14:36.953 { 00:14:36.953 "name": "BaseBdev3", 00:14:36.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.953 "is_configured": false, 00:14:36.953 "data_offset": 0, 00:14:36.953 "data_size": 0 00:14:36.953 }, 00:14:36.953 { 00:14:36.953 "name": "BaseBdev4", 00:14:36.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.953 "is_configured": false, 00:14:36.953 "data_offset": 0, 00:14:36.953 "data_size": 0 00:14:36.953 } 00:14:36.953 ] 00:14:36.953 }' 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.953 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.212 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.213 
[2024-11-20 13:28:18.835221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:37.213 [2024-11-20 13:28:18.835312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.213 [2024-11-20 13:28:18.843321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.213 [2024-11-20 13:28:18.845666] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:37.213 [2024-11-20 13:28:18.845745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:37.213 [2024-11-20 13:28:18.845760] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:37.213 [2024-11-20 13:28:18.845775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:37.213 [2024-11-20 13:28:18.845786] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:37.213 [2024-11-20 13:28:18.845799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.213 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.508 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.508 "name": "Existed_Raid", 00:14:37.508 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:37.508 "strip_size_kb": 64, 00:14:37.508 "state": "configuring", 00:14:37.508 "raid_level": "raid5f", 00:14:37.508 "superblock": false, 00:14:37.508 "num_base_bdevs": 4, 00:14:37.508 "num_base_bdevs_discovered": 1, 00:14:37.508 "num_base_bdevs_operational": 4, 00:14:37.508 "base_bdevs_list": [ 00:14:37.508 { 00:14:37.508 "name": "BaseBdev1", 00:14:37.508 "uuid": "4b85f866-b53f-482a-8b98-329e5d65b635", 00:14:37.508 "is_configured": true, 00:14:37.508 "data_offset": 0, 00:14:37.508 "data_size": 65536 00:14:37.508 }, 00:14:37.508 { 00:14:37.508 "name": "BaseBdev2", 00:14:37.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.509 "is_configured": false, 00:14:37.509 "data_offset": 0, 00:14:37.509 "data_size": 0 00:14:37.509 }, 00:14:37.509 { 00:14:37.509 "name": "BaseBdev3", 00:14:37.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.509 "is_configured": false, 00:14:37.509 "data_offset": 0, 00:14:37.509 "data_size": 0 00:14:37.509 }, 00:14:37.509 { 00:14:37.509 "name": "BaseBdev4", 00:14:37.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.509 "is_configured": false, 00:14:37.509 "data_offset": 0, 00:14:37.509 "data_size": 0 00:14:37.509 } 00:14:37.509 ] 00:14:37.509 }' 00:14:37.509 13:28:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.509 13:28:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.767 [2024-11-20 13:28:19.266851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.767 BaseBdev2 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.767 [ 00:14:37.767 { 00:14:37.767 "name": "BaseBdev2", 00:14:37.767 "aliases": [ 00:14:37.767 "99cc543e-ebf6-4878-a565-e0bee0a73958" 00:14:37.767 ], 00:14:37.767 "product_name": "Malloc disk", 00:14:37.767 "block_size": 512, 00:14:37.767 "num_blocks": 65536, 00:14:37.767 "uuid": "99cc543e-ebf6-4878-a565-e0bee0a73958", 00:14:37.767 "assigned_rate_limits": { 00:14:37.767 "rw_ios_per_sec": 0, 00:14:37.767 "rw_mbytes_per_sec": 0, 00:14:37.767 
"r_mbytes_per_sec": 0, 00:14:37.767 "w_mbytes_per_sec": 0 00:14:37.767 }, 00:14:37.767 "claimed": true, 00:14:37.767 "claim_type": "exclusive_write", 00:14:37.767 "zoned": false, 00:14:37.767 "supported_io_types": { 00:14:37.767 "read": true, 00:14:37.767 "write": true, 00:14:37.767 "unmap": true, 00:14:37.767 "flush": true, 00:14:37.767 "reset": true, 00:14:37.767 "nvme_admin": false, 00:14:37.767 "nvme_io": false, 00:14:37.767 "nvme_io_md": false, 00:14:37.767 "write_zeroes": true, 00:14:37.767 "zcopy": true, 00:14:37.767 "get_zone_info": false, 00:14:37.767 "zone_management": false, 00:14:37.767 "zone_append": false, 00:14:37.767 "compare": false, 00:14:37.767 "compare_and_write": false, 00:14:37.767 "abort": true, 00:14:37.767 "seek_hole": false, 00:14:37.767 "seek_data": false, 00:14:37.767 "copy": true, 00:14:37.767 "nvme_iov_md": false 00:14:37.767 }, 00:14:37.767 "memory_domains": [ 00:14:37.767 { 00:14:37.767 "dma_device_id": "system", 00:14:37.767 "dma_device_type": 1 00:14:37.767 }, 00:14:37.767 { 00:14:37.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.767 "dma_device_type": 2 00:14:37.767 } 00:14:37.767 ], 00:14:37.767 "driver_specific": {} 00:14:37.767 } 00:14:37.767 ] 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.767 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.767 "name": "Existed_Raid", 00:14:37.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.767 "strip_size_kb": 64, 00:14:37.767 "state": "configuring", 00:14:37.767 "raid_level": "raid5f", 00:14:37.767 "superblock": false, 00:14:37.767 "num_base_bdevs": 4, 00:14:37.767 "num_base_bdevs_discovered": 2, 00:14:37.767 "num_base_bdevs_operational": 4, 00:14:37.767 "base_bdevs_list": [ 00:14:37.767 { 00:14:37.767 "name": "BaseBdev1", 00:14:37.767 "uuid": 
"4b85f866-b53f-482a-8b98-329e5d65b635", 00:14:37.767 "is_configured": true, 00:14:37.767 "data_offset": 0, 00:14:37.767 "data_size": 65536 00:14:37.767 }, 00:14:37.767 { 00:14:37.767 "name": "BaseBdev2", 00:14:37.767 "uuid": "99cc543e-ebf6-4878-a565-e0bee0a73958", 00:14:37.767 "is_configured": true, 00:14:37.768 "data_offset": 0, 00:14:37.768 "data_size": 65536 00:14:37.768 }, 00:14:37.768 { 00:14:37.768 "name": "BaseBdev3", 00:14:37.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.768 "is_configured": false, 00:14:37.768 "data_offset": 0, 00:14:37.768 "data_size": 0 00:14:37.768 }, 00:14:37.768 { 00:14:37.768 "name": "BaseBdev4", 00:14:37.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.768 "is_configured": false, 00:14:37.768 "data_offset": 0, 00:14:37.768 "data_size": 0 00:14:37.768 } 00:14:37.768 ] 00:14:37.768 }' 00:14:37.768 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.768 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.336 [2024-11-20 13:28:19.751933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:38.336 BaseBdev3 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.336 [ 00:14:38.336 { 00:14:38.336 "name": "BaseBdev3", 00:14:38.336 "aliases": [ 00:14:38.336 "f9e2f4c5-cc10-42d6-b249-32d37ca13436" 00:14:38.336 ], 00:14:38.336 "product_name": "Malloc disk", 00:14:38.336 "block_size": 512, 00:14:38.336 "num_blocks": 65536, 00:14:38.336 "uuid": "f9e2f4c5-cc10-42d6-b249-32d37ca13436", 00:14:38.336 "assigned_rate_limits": { 00:14:38.336 "rw_ios_per_sec": 0, 00:14:38.336 "rw_mbytes_per_sec": 0, 00:14:38.336 "r_mbytes_per_sec": 0, 00:14:38.336 "w_mbytes_per_sec": 0 00:14:38.336 }, 00:14:38.336 "claimed": true, 00:14:38.336 "claim_type": "exclusive_write", 00:14:38.336 "zoned": false, 00:14:38.336 "supported_io_types": { 00:14:38.336 "read": true, 00:14:38.336 "write": true, 00:14:38.336 "unmap": true, 00:14:38.336 "flush": true, 00:14:38.336 "reset": true, 00:14:38.336 "nvme_admin": false, 
00:14:38.336 "nvme_io": false, 00:14:38.336 "nvme_io_md": false, 00:14:38.336 "write_zeroes": true, 00:14:38.336 "zcopy": true, 00:14:38.336 "get_zone_info": false, 00:14:38.336 "zone_management": false, 00:14:38.336 "zone_append": false, 00:14:38.336 "compare": false, 00:14:38.336 "compare_and_write": false, 00:14:38.336 "abort": true, 00:14:38.336 "seek_hole": false, 00:14:38.336 "seek_data": false, 00:14:38.336 "copy": true, 00:14:38.336 "nvme_iov_md": false 00:14:38.336 }, 00:14:38.336 "memory_domains": [ 00:14:38.336 { 00:14:38.336 "dma_device_id": "system", 00:14:38.336 "dma_device_type": 1 00:14:38.336 }, 00:14:38.336 { 00:14:38.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.336 "dma_device_type": 2 00:14:38.336 } 00:14:38.336 ], 00:14:38.336 "driver_specific": {} 00:14:38.336 } 00:14:38.336 ] 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.336 "name": "Existed_Raid", 00:14:38.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.336 "strip_size_kb": 64, 00:14:38.336 "state": "configuring", 00:14:38.336 "raid_level": "raid5f", 00:14:38.336 "superblock": false, 00:14:38.336 "num_base_bdevs": 4, 00:14:38.336 "num_base_bdevs_discovered": 3, 00:14:38.336 "num_base_bdevs_operational": 4, 00:14:38.336 "base_bdevs_list": [ 00:14:38.336 { 00:14:38.336 "name": "BaseBdev1", 00:14:38.336 "uuid": "4b85f866-b53f-482a-8b98-329e5d65b635", 00:14:38.336 "is_configured": true, 00:14:38.336 "data_offset": 0, 00:14:38.336 "data_size": 65536 00:14:38.336 }, 00:14:38.336 { 00:14:38.336 "name": "BaseBdev2", 00:14:38.336 "uuid": "99cc543e-ebf6-4878-a565-e0bee0a73958", 00:14:38.336 "is_configured": true, 00:14:38.336 "data_offset": 0, 00:14:38.336 "data_size": 65536 00:14:38.336 }, 00:14:38.336 { 
00:14:38.336 "name": "BaseBdev3", 00:14:38.336 "uuid": "f9e2f4c5-cc10-42d6-b249-32d37ca13436", 00:14:38.336 "is_configured": true, 00:14:38.336 "data_offset": 0, 00:14:38.336 "data_size": 65536 00:14:38.336 }, 00:14:38.336 { 00:14:38.336 "name": "BaseBdev4", 00:14:38.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.336 "is_configured": false, 00:14:38.336 "data_offset": 0, 00:14:38.336 "data_size": 0 00:14:38.336 } 00:14:38.336 ] 00:14:38.336 }' 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.336 13:28:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.595 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:38.595 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.595 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.595 [2024-11-20 13:28:20.251809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:38.595 [2024-11-20 13:28:20.251911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:38.595 [2024-11-20 13:28:20.251926] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:38.595 [2024-11-20 13:28:20.252376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:38.595 [2024-11-20 13:28:20.253267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:38.595 [2024-11-20 13:28:20.253313] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:38.595 [2024-11-20 13:28:20.253719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.595 BaseBdev4 00:14:38.595 13:28:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.595 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:38.595 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:38.595 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:38.595 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:38.595 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:38.595 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:38.595 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:38.595 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.595 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.853 [ 00:14:38.853 { 00:14:38.853 "name": "BaseBdev4", 00:14:38.853 "aliases": [ 00:14:38.853 "151ecc03-ec64-46d1-9ab3-63acb77e08d4" 00:14:38.853 ], 00:14:38.853 "product_name": "Malloc disk", 00:14:38.853 "block_size": 512, 00:14:38.853 "num_blocks": 65536, 00:14:38.853 "uuid": "151ecc03-ec64-46d1-9ab3-63acb77e08d4", 00:14:38.853 "assigned_rate_limits": { 00:14:38.853 "rw_ios_per_sec": 0, 00:14:38.853 
"rw_mbytes_per_sec": 0, 00:14:38.853 "r_mbytes_per_sec": 0, 00:14:38.853 "w_mbytes_per_sec": 0 00:14:38.853 }, 00:14:38.853 "claimed": true, 00:14:38.853 "claim_type": "exclusive_write", 00:14:38.853 "zoned": false, 00:14:38.853 "supported_io_types": { 00:14:38.853 "read": true, 00:14:38.853 "write": true, 00:14:38.853 "unmap": true, 00:14:38.853 "flush": true, 00:14:38.853 "reset": true, 00:14:38.853 "nvme_admin": false, 00:14:38.853 "nvme_io": false, 00:14:38.853 "nvme_io_md": false, 00:14:38.853 "write_zeroes": true, 00:14:38.853 "zcopy": true, 00:14:38.853 "get_zone_info": false, 00:14:38.853 "zone_management": false, 00:14:38.853 "zone_append": false, 00:14:38.853 "compare": false, 00:14:38.853 "compare_and_write": false, 00:14:38.853 "abort": true, 00:14:38.853 "seek_hole": false, 00:14:38.853 "seek_data": false, 00:14:38.853 "copy": true, 00:14:38.853 "nvme_iov_md": false 00:14:38.853 }, 00:14:38.853 "memory_domains": [ 00:14:38.853 { 00:14:38.853 "dma_device_id": "system", 00:14:38.853 "dma_device_type": 1 00:14:38.853 }, 00:14:38.853 { 00:14:38.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.853 "dma_device_type": 2 00:14:38.853 } 00:14:38.853 ], 00:14:38.853 "driver_specific": {} 00:14:38.853 } 00:14:38.853 ] 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.853 13:28:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.853 "name": "Existed_Raid", 00:14:38.853 "uuid": "e8f2caef-d5af-4ec7-84e3-a3837d8e17a5", 00:14:38.853 "strip_size_kb": 64, 00:14:38.853 "state": "online", 00:14:38.853 "raid_level": "raid5f", 00:14:38.853 "superblock": false, 00:14:38.853 "num_base_bdevs": 4, 00:14:38.853 "num_base_bdevs_discovered": 4, 00:14:38.853 "num_base_bdevs_operational": 4, 00:14:38.853 "base_bdevs_list": [ 00:14:38.853 { 00:14:38.853 "name": 
"BaseBdev1", 00:14:38.853 "uuid": "4b85f866-b53f-482a-8b98-329e5d65b635", 00:14:38.853 "is_configured": true, 00:14:38.853 "data_offset": 0, 00:14:38.853 "data_size": 65536 00:14:38.853 }, 00:14:38.853 { 00:14:38.853 "name": "BaseBdev2", 00:14:38.853 "uuid": "99cc543e-ebf6-4878-a565-e0bee0a73958", 00:14:38.853 "is_configured": true, 00:14:38.853 "data_offset": 0, 00:14:38.853 "data_size": 65536 00:14:38.853 }, 00:14:38.853 { 00:14:38.853 "name": "BaseBdev3", 00:14:38.853 "uuid": "f9e2f4c5-cc10-42d6-b249-32d37ca13436", 00:14:38.853 "is_configured": true, 00:14:38.853 "data_offset": 0, 00:14:38.853 "data_size": 65536 00:14:38.853 }, 00:14:38.853 { 00:14:38.853 "name": "BaseBdev4", 00:14:38.853 "uuid": "151ecc03-ec64-46d1-9ab3-63acb77e08d4", 00:14:38.853 "is_configured": true, 00:14:38.853 "data_offset": 0, 00:14:38.853 "data_size": 65536 00:14:38.853 } 00:14:38.853 ] 00:14:38.853 }' 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.853 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.112 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:39.112 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:39.112 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:39.112 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:39.112 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:39.112 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:39.112 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:39.112 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:14:39.112 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.112 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.112 [2024-11-20 13:28:20.768007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.112 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.370 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:39.370 "name": "Existed_Raid", 00:14:39.370 "aliases": [ 00:14:39.370 "e8f2caef-d5af-4ec7-84e3-a3837d8e17a5" 00:14:39.370 ], 00:14:39.370 "product_name": "Raid Volume", 00:14:39.370 "block_size": 512, 00:14:39.370 "num_blocks": 196608, 00:14:39.370 "uuid": "e8f2caef-d5af-4ec7-84e3-a3837d8e17a5", 00:14:39.370 "assigned_rate_limits": { 00:14:39.371 "rw_ios_per_sec": 0, 00:14:39.371 "rw_mbytes_per_sec": 0, 00:14:39.371 "r_mbytes_per_sec": 0, 00:14:39.371 "w_mbytes_per_sec": 0 00:14:39.371 }, 00:14:39.371 "claimed": false, 00:14:39.371 "zoned": false, 00:14:39.371 "supported_io_types": { 00:14:39.371 "read": true, 00:14:39.371 "write": true, 00:14:39.371 "unmap": false, 00:14:39.371 "flush": false, 00:14:39.371 "reset": true, 00:14:39.371 "nvme_admin": false, 00:14:39.371 "nvme_io": false, 00:14:39.371 "nvme_io_md": false, 00:14:39.371 "write_zeroes": true, 00:14:39.371 "zcopy": false, 00:14:39.371 "get_zone_info": false, 00:14:39.371 "zone_management": false, 00:14:39.371 "zone_append": false, 00:14:39.371 "compare": false, 00:14:39.371 "compare_and_write": false, 00:14:39.371 "abort": false, 00:14:39.371 "seek_hole": false, 00:14:39.371 "seek_data": false, 00:14:39.371 "copy": false, 00:14:39.371 "nvme_iov_md": false 00:14:39.371 }, 00:14:39.371 "driver_specific": { 00:14:39.371 "raid": { 00:14:39.371 "uuid": "e8f2caef-d5af-4ec7-84e3-a3837d8e17a5", 00:14:39.371 "strip_size_kb": 64, 
00:14:39.371 "state": "online", 00:14:39.371 "raid_level": "raid5f", 00:14:39.371 "superblock": false, 00:14:39.371 "num_base_bdevs": 4, 00:14:39.371 "num_base_bdevs_discovered": 4, 00:14:39.371 "num_base_bdevs_operational": 4, 00:14:39.371 "base_bdevs_list": [ 00:14:39.371 { 00:14:39.371 "name": "BaseBdev1", 00:14:39.371 "uuid": "4b85f866-b53f-482a-8b98-329e5d65b635", 00:14:39.371 "is_configured": true, 00:14:39.371 "data_offset": 0, 00:14:39.371 "data_size": 65536 00:14:39.371 }, 00:14:39.371 { 00:14:39.371 "name": "BaseBdev2", 00:14:39.371 "uuid": "99cc543e-ebf6-4878-a565-e0bee0a73958", 00:14:39.371 "is_configured": true, 00:14:39.371 "data_offset": 0, 00:14:39.371 "data_size": 65536 00:14:39.371 }, 00:14:39.371 { 00:14:39.371 "name": "BaseBdev3", 00:14:39.371 "uuid": "f9e2f4c5-cc10-42d6-b249-32d37ca13436", 00:14:39.371 "is_configured": true, 00:14:39.371 "data_offset": 0, 00:14:39.371 "data_size": 65536 00:14:39.371 }, 00:14:39.371 { 00:14:39.371 "name": "BaseBdev4", 00:14:39.371 "uuid": "151ecc03-ec64-46d1-9ab3-63acb77e08d4", 00:14:39.371 "is_configured": true, 00:14:39.371 "data_offset": 0, 00:14:39.371 "data_size": 65536 00:14:39.371 } 00:14:39.371 ] 00:14:39.371 } 00:14:39.371 } 00:14:39.371 }' 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:39.371 BaseBdev2 00:14:39.371 BaseBdev3 00:14:39.371 BaseBdev4' 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.371 13:28:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.371 13:28:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.371 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.371 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.371 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.371 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.371 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:39.371 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.371 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.371 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.669 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.669 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.669 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.669 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:39.669 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.669 13:28:21 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:14:39.669 [2024-11-20 13:28:21.083834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:39.669 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.669 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:39.669 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:39.669 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.670 13:28:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.670 "name": "Existed_Raid", 00:14:39.670 "uuid": "e8f2caef-d5af-4ec7-84e3-a3837d8e17a5", 00:14:39.670 "strip_size_kb": 64, 00:14:39.670 "state": "online", 00:14:39.670 "raid_level": "raid5f", 00:14:39.670 "superblock": false, 00:14:39.670 "num_base_bdevs": 4, 00:14:39.670 "num_base_bdevs_discovered": 3, 00:14:39.670 "num_base_bdevs_operational": 3, 00:14:39.670 "base_bdevs_list": [ 00:14:39.670 { 00:14:39.670 "name": null, 00:14:39.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.670 "is_configured": false, 00:14:39.670 "data_offset": 0, 00:14:39.670 "data_size": 65536 00:14:39.670 }, 00:14:39.670 { 00:14:39.670 "name": "BaseBdev2", 00:14:39.670 "uuid": "99cc543e-ebf6-4878-a565-e0bee0a73958", 00:14:39.670 "is_configured": true, 00:14:39.670 "data_offset": 0, 00:14:39.670 "data_size": 65536 00:14:39.670 }, 00:14:39.670 { 00:14:39.670 "name": "BaseBdev3", 00:14:39.670 "uuid": "f9e2f4c5-cc10-42d6-b249-32d37ca13436", 00:14:39.670 "is_configured": true, 00:14:39.670 "data_offset": 0, 00:14:39.670 "data_size": 65536 00:14:39.670 }, 00:14:39.670 { 00:14:39.670 "name": "BaseBdev4", 00:14:39.670 "uuid": "151ecc03-ec64-46d1-9ab3-63acb77e08d4", 00:14:39.670 "is_configured": true, 00:14:39.670 "data_offset": 0, 00:14:39.670 "data_size": 65536 00:14:39.670 } 00:14:39.670 ] 00:14:39.670 }' 00:14:39.670 
13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.670 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.950 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:39.950 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:39.950 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.950 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.950 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:39.950 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.950 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.211 [2024-11-20 13:28:21.644300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:40.211 [2024-11-20 13:28:21.644547] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.211 [2024-11-20 13:28:21.657230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.211 [2024-11-20 13:28:21.709293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.211 [2024-11-20 13:28:21.781829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:40.211 [2024-11-20 13:28:21.781906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.211 13:28:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.211 BaseBdev2 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.211 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.470 [ 00:14:40.470 { 00:14:40.470 "name": "BaseBdev2", 00:14:40.470 "aliases": [ 00:14:40.470 "3494611b-046d-4576-85ac-6e1b670b7322" 00:14:40.470 ], 00:14:40.470 "product_name": "Malloc disk", 00:14:40.470 "block_size": 512, 00:14:40.470 "num_blocks": 65536, 00:14:40.470 "uuid": "3494611b-046d-4576-85ac-6e1b670b7322", 00:14:40.470 "assigned_rate_limits": { 00:14:40.470 "rw_ios_per_sec": 0, 00:14:40.470 "rw_mbytes_per_sec": 0, 00:14:40.470 "r_mbytes_per_sec": 0, 00:14:40.470 "w_mbytes_per_sec": 0 00:14:40.470 }, 00:14:40.470 "claimed": false, 00:14:40.470 "zoned": false, 00:14:40.470 "supported_io_types": { 00:14:40.470 "read": true, 00:14:40.470 "write": true, 00:14:40.470 "unmap": true, 00:14:40.470 "flush": true, 00:14:40.470 "reset": true, 00:14:40.470 "nvme_admin": false, 00:14:40.470 "nvme_io": false, 00:14:40.470 "nvme_io_md": false, 00:14:40.470 "write_zeroes": true, 00:14:40.470 "zcopy": true, 00:14:40.470 "get_zone_info": false, 00:14:40.470 "zone_management": false, 00:14:40.470 "zone_append": false, 00:14:40.470 "compare": false, 00:14:40.470 "compare_and_write": false, 00:14:40.470 "abort": true, 00:14:40.470 "seek_hole": false, 00:14:40.470 "seek_data": false, 00:14:40.470 "copy": true, 00:14:40.470 "nvme_iov_md": false 00:14:40.470 }, 00:14:40.470 "memory_domains": [ 00:14:40.470 { 00:14:40.470 "dma_device_id": "system", 00:14:40.470 "dma_device_type": 1 00:14:40.470 }, 
00:14:40.470 { 00:14:40.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.470 "dma_device_type": 2 00:14:40.470 } 00:14:40.470 ], 00:14:40.470 "driver_specific": {} 00:14:40.470 } 00:14:40.470 ] 00:14:40.470 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.470 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:40.470 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:40.470 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:40.470 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:40.470 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.470 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.470 BaseBdev3 00:14:40.470 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.471 [ 00:14:40.471 { 00:14:40.471 "name": "BaseBdev3", 00:14:40.471 "aliases": [ 00:14:40.471 "99fdbf79-060c-46a4-8ac5-a597a55515e1" 00:14:40.471 ], 00:14:40.471 "product_name": "Malloc disk", 00:14:40.471 "block_size": 512, 00:14:40.471 "num_blocks": 65536, 00:14:40.471 "uuid": "99fdbf79-060c-46a4-8ac5-a597a55515e1", 00:14:40.471 "assigned_rate_limits": { 00:14:40.471 "rw_ios_per_sec": 0, 00:14:40.471 "rw_mbytes_per_sec": 0, 00:14:40.471 "r_mbytes_per_sec": 0, 00:14:40.471 "w_mbytes_per_sec": 0 00:14:40.471 }, 00:14:40.471 "claimed": false, 00:14:40.471 "zoned": false, 00:14:40.471 "supported_io_types": { 00:14:40.471 "read": true, 00:14:40.471 "write": true, 00:14:40.471 "unmap": true, 00:14:40.471 "flush": true, 00:14:40.471 "reset": true, 00:14:40.471 "nvme_admin": false, 00:14:40.471 "nvme_io": false, 00:14:40.471 "nvme_io_md": false, 00:14:40.471 "write_zeroes": true, 00:14:40.471 "zcopy": true, 00:14:40.471 "get_zone_info": false, 00:14:40.471 "zone_management": false, 00:14:40.471 "zone_append": false, 00:14:40.471 "compare": false, 00:14:40.471 "compare_and_write": false, 00:14:40.471 "abort": true, 00:14:40.471 "seek_hole": false, 00:14:40.471 "seek_data": false, 00:14:40.471 "copy": true, 00:14:40.471 "nvme_iov_md": false 00:14:40.471 }, 00:14:40.471 "memory_domains": [ 00:14:40.471 { 00:14:40.471 "dma_device_id": "system", 00:14:40.471 
"dma_device_type": 1 00:14:40.471 }, 00:14:40.471 { 00:14:40.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.471 "dma_device_type": 2 00:14:40.471 } 00:14:40.471 ], 00:14:40.471 "driver_specific": {} 00:14:40.471 } 00:14:40.471 ] 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.471 BaseBdev4 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.471 13:28:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.471 13:28:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.471 [ 00:14:40.471 { 00:14:40.471 "name": "BaseBdev4", 00:14:40.471 "aliases": [ 00:14:40.471 "01627dbf-d648-463a-bd5b-51f88a0d99a4" 00:14:40.471 ], 00:14:40.471 "product_name": "Malloc disk", 00:14:40.471 "block_size": 512, 00:14:40.471 "num_blocks": 65536, 00:14:40.471 "uuid": "01627dbf-d648-463a-bd5b-51f88a0d99a4", 00:14:40.471 "assigned_rate_limits": { 00:14:40.471 "rw_ios_per_sec": 0, 00:14:40.471 "rw_mbytes_per_sec": 0, 00:14:40.471 "r_mbytes_per_sec": 0, 00:14:40.471 "w_mbytes_per_sec": 0 00:14:40.471 }, 00:14:40.471 "claimed": false, 00:14:40.471 "zoned": false, 00:14:40.471 "supported_io_types": { 00:14:40.471 "read": true, 00:14:40.471 "write": true, 00:14:40.471 "unmap": true, 00:14:40.471 "flush": true, 00:14:40.471 "reset": true, 00:14:40.471 "nvme_admin": false, 00:14:40.471 "nvme_io": false, 00:14:40.471 "nvme_io_md": false, 00:14:40.471 "write_zeroes": true, 00:14:40.471 "zcopy": true, 00:14:40.471 "get_zone_info": false, 00:14:40.471 "zone_management": false, 00:14:40.471 "zone_append": false, 00:14:40.471 "compare": false, 00:14:40.471 "compare_and_write": false, 00:14:40.471 "abort": true, 00:14:40.471 "seek_hole": false, 00:14:40.471 "seek_data": false, 00:14:40.471 "copy": true, 00:14:40.471 "nvme_iov_md": false 00:14:40.471 }, 00:14:40.471 "memory_domains": [ 00:14:40.471 { 00:14:40.471 
"dma_device_id": "system", 00:14:40.471 "dma_device_type": 1 00:14:40.471 }, 00:14:40.471 { 00:14:40.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.471 "dma_device_type": 2 00:14:40.471 } 00:14:40.471 ], 00:14:40.471 "driver_specific": {} 00:14:40.471 } 00:14:40.471 ] 00:14:40.471 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.471 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:40.471 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:40.471 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:40.471 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:40.471 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.471 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.471 [2024-11-20 13:28:22.017676] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.471 [2024-11-20 13:28:22.017912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.471 [2024-11-20 13:28:22.018043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.471 [2024-11-20 13:28:22.021177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:40.471 [2024-11-20 13:28:22.021433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:40.471 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.471 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:14:40.471 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.471 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.471 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.471 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.471 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.472 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.472 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.472 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.472 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.472 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.472 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.472 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.472 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.472 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.472 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.472 "name": "Existed_Raid", 00:14:40.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.472 "strip_size_kb": 64, 00:14:40.472 "state": "configuring", 00:14:40.472 "raid_level": "raid5f", 00:14:40.472 "superblock": false, 00:14:40.472 
"num_base_bdevs": 4, 00:14:40.472 "num_base_bdevs_discovered": 3, 00:14:40.472 "num_base_bdevs_operational": 4, 00:14:40.472 "base_bdevs_list": [ 00:14:40.472 { 00:14:40.472 "name": "BaseBdev1", 00:14:40.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.472 "is_configured": false, 00:14:40.472 "data_offset": 0, 00:14:40.472 "data_size": 0 00:14:40.472 }, 00:14:40.472 { 00:14:40.472 "name": "BaseBdev2", 00:14:40.472 "uuid": "3494611b-046d-4576-85ac-6e1b670b7322", 00:14:40.472 "is_configured": true, 00:14:40.472 "data_offset": 0, 00:14:40.472 "data_size": 65536 00:14:40.472 }, 00:14:40.472 { 00:14:40.472 "name": "BaseBdev3", 00:14:40.472 "uuid": "99fdbf79-060c-46a4-8ac5-a597a55515e1", 00:14:40.472 "is_configured": true, 00:14:40.472 "data_offset": 0, 00:14:40.472 "data_size": 65536 00:14:40.472 }, 00:14:40.472 { 00:14:40.472 "name": "BaseBdev4", 00:14:40.472 "uuid": "01627dbf-d648-463a-bd5b-51f88a0d99a4", 00:14:40.472 "is_configured": true, 00:14:40.472 "data_offset": 0, 00:14:40.472 "data_size": 65536 00:14:40.472 } 00:14:40.472 ] 00:14:40.472 }' 00:14:40.472 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.472 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.040 [2024-11-20 13:28:22.493086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.040 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.040 "name": "Existed_Raid", 00:14:41.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.040 "strip_size_kb": 64, 00:14:41.040 "state": "configuring", 00:14:41.040 "raid_level": "raid5f", 00:14:41.040 "superblock": false, 00:14:41.040 "num_base_bdevs": 4, 
00:14:41.040 "num_base_bdevs_discovered": 2, 00:14:41.040 "num_base_bdevs_operational": 4, 00:14:41.040 "base_bdevs_list": [ 00:14:41.040 { 00:14:41.040 "name": "BaseBdev1", 00:14:41.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.040 "is_configured": false, 00:14:41.040 "data_offset": 0, 00:14:41.040 "data_size": 0 00:14:41.040 }, 00:14:41.040 { 00:14:41.040 "name": null, 00:14:41.040 "uuid": "3494611b-046d-4576-85ac-6e1b670b7322", 00:14:41.040 "is_configured": false, 00:14:41.040 "data_offset": 0, 00:14:41.040 "data_size": 65536 00:14:41.040 }, 00:14:41.040 { 00:14:41.040 "name": "BaseBdev3", 00:14:41.040 "uuid": "99fdbf79-060c-46a4-8ac5-a597a55515e1", 00:14:41.040 "is_configured": true, 00:14:41.040 "data_offset": 0, 00:14:41.040 "data_size": 65536 00:14:41.041 }, 00:14:41.041 { 00:14:41.041 "name": "BaseBdev4", 00:14:41.041 "uuid": "01627dbf-d648-463a-bd5b-51f88a0d99a4", 00:14:41.041 "is_configured": true, 00:14:41.041 "data_offset": 0, 00:14:41.041 "data_size": 65536 00:14:41.041 } 00:14:41.041 ] 00:14:41.041 }' 00:14:41.041 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.041 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.299 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.299 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.299 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.299 13:28:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:41.557 13:28:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.557 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:41.557 13:28:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:41.557 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.557 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.557 [2024-11-20 13:28:23.023974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.557 BaseBdev1 00:14:41.557 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.557 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:41.557 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:41.557 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:41.557 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:41.557 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:41.557 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:41.557 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:41.557 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.557 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.557 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.557 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:41.557 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.557 13:28:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.557 [ 00:14:41.557 { 00:14:41.557 "name": "BaseBdev1", 00:14:41.557 "aliases": [ 00:14:41.557 "968c4a79-4805-44ee-80ea-26949bde1ab7" 00:14:41.557 ], 00:14:41.557 "product_name": "Malloc disk", 00:14:41.557 "block_size": 512, 00:14:41.557 "num_blocks": 65536, 00:14:41.557 "uuid": "968c4a79-4805-44ee-80ea-26949bde1ab7", 00:14:41.557 "assigned_rate_limits": { 00:14:41.557 "rw_ios_per_sec": 0, 00:14:41.557 "rw_mbytes_per_sec": 0, 00:14:41.557 "r_mbytes_per_sec": 0, 00:14:41.557 "w_mbytes_per_sec": 0 00:14:41.557 }, 00:14:41.557 "claimed": true, 00:14:41.557 "claim_type": "exclusive_write", 00:14:41.557 "zoned": false, 00:14:41.557 "supported_io_types": { 00:14:41.557 "read": true, 00:14:41.557 "write": true, 00:14:41.557 "unmap": true, 00:14:41.557 "flush": true, 00:14:41.557 "reset": true, 00:14:41.557 "nvme_admin": false, 00:14:41.557 "nvme_io": false, 00:14:41.557 "nvme_io_md": false, 00:14:41.557 "write_zeroes": true, 00:14:41.557 "zcopy": true, 00:14:41.557 "get_zone_info": false, 00:14:41.557 "zone_management": false, 00:14:41.557 "zone_append": false, 00:14:41.557 "compare": false, 00:14:41.557 "compare_and_write": false, 00:14:41.557 "abort": true, 00:14:41.557 "seek_hole": false, 00:14:41.557 "seek_data": false, 00:14:41.557 "copy": true, 00:14:41.557 "nvme_iov_md": false 00:14:41.557 }, 00:14:41.557 "memory_domains": [ 00:14:41.557 { 00:14:41.558 "dma_device_id": "system", 00:14:41.558 "dma_device_type": 1 00:14:41.558 }, 00:14:41.558 { 00:14:41.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.558 "dma_device_type": 2 00:14:41.558 } 00:14:41.558 ], 00:14:41.558 "driver_specific": {} 00:14:41.558 } 00:14:41.558 ] 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:41.558 13:28:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.558 "name": "Existed_Raid", 00:14:41.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.558 "strip_size_kb": 64, 00:14:41.558 "state": 
"configuring", 00:14:41.558 "raid_level": "raid5f", 00:14:41.558 "superblock": false, 00:14:41.558 "num_base_bdevs": 4, 00:14:41.558 "num_base_bdevs_discovered": 3, 00:14:41.558 "num_base_bdevs_operational": 4, 00:14:41.558 "base_bdevs_list": [ 00:14:41.558 { 00:14:41.558 "name": "BaseBdev1", 00:14:41.558 "uuid": "968c4a79-4805-44ee-80ea-26949bde1ab7", 00:14:41.558 "is_configured": true, 00:14:41.558 "data_offset": 0, 00:14:41.558 "data_size": 65536 00:14:41.558 }, 00:14:41.558 { 00:14:41.558 "name": null, 00:14:41.558 "uuid": "3494611b-046d-4576-85ac-6e1b670b7322", 00:14:41.558 "is_configured": false, 00:14:41.558 "data_offset": 0, 00:14:41.558 "data_size": 65536 00:14:41.558 }, 00:14:41.558 { 00:14:41.558 "name": "BaseBdev3", 00:14:41.558 "uuid": "99fdbf79-060c-46a4-8ac5-a597a55515e1", 00:14:41.558 "is_configured": true, 00:14:41.558 "data_offset": 0, 00:14:41.558 "data_size": 65536 00:14:41.558 }, 00:14:41.558 { 00:14:41.558 "name": "BaseBdev4", 00:14:41.558 "uuid": "01627dbf-d648-463a-bd5b-51f88a0d99a4", 00:14:41.558 "is_configured": true, 00:14:41.558 "data_offset": 0, 00:14:41.558 "data_size": 65536 00:14:41.558 } 00:14:41.558 ] 00:14:41.558 }' 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.558 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.124 13:28:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.124 [2024-11-20 13:28:23.591724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.124 13:28:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.124 "name": "Existed_Raid", 00:14:42.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.124 "strip_size_kb": 64, 00:14:42.124 "state": "configuring", 00:14:42.124 "raid_level": "raid5f", 00:14:42.124 "superblock": false, 00:14:42.124 "num_base_bdevs": 4, 00:14:42.124 "num_base_bdevs_discovered": 2, 00:14:42.124 "num_base_bdevs_operational": 4, 00:14:42.124 "base_bdevs_list": [ 00:14:42.124 { 00:14:42.124 "name": "BaseBdev1", 00:14:42.124 "uuid": "968c4a79-4805-44ee-80ea-26949bde1ab7", 00:14:42.124 "is_configured": true, 00:14:42.124 "data_offset": 0, 00:14:42.124 "data_size": 65536 00:14:42.124 }, 00:14:42.124 { 00:14:42.124 "name": null, 00:14:42.124 "uuid": "3494611b-046d-4576-85ac-6e1b670b7322", 00:14:42.124 "is_configured": false, 00:14:42.124 "data_offset": 0, 00:14:42.124 "data_size": 65536 00:14:42.124 }, 00:14:42.124 { 00:14:42.124 "name": null, 00:14:42.124 "uuid": "99fdbf79-060c-46a4-8ac5-a597a55515e1", 00:14:42.124 "is_configured": false, 00:14:42.124 "data_offset": 0, 00:14:42.124 "data_size": 65536 00:14:42.124 }, 00:14:42.124 { 00:14:42.124 "name": "BaseBdev4", 00:14:42.124 "uuid": "01627dbf-d648-463a-bd5b-51f88a0d99a4", 00:14:42.124 "is_configured": true, 00:14:42.124 "data_offset": 0, 00:14:42.124 "data_size": 65536 00:14:42.124 } 00:14:42.124 ] 00:14:42.124 }' 00:14:42.124 13:28:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.125 13:28:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.382 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:42.382 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.382 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.382 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.641 [2024-11-20 13:28:24.095736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.641 
13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.641 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.641 "name": "Existed_Raid", 00:14:42.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.641 "strip_size_kb": 64, 00:14:42.641 "state": "configuring", 00:14:42.641 "raid_level": "raid5f", 00:14:42.641 "superblock": false, 00:14:42.641 "num_base_bdevs": 4, 00:14:42.641 "num_base_bdevs_discovered": 3, 00:14:42.641 "num_base_bdevs_operational": 4, 00:14:42.641 "base_bdevs_list": [ 00:14:42.641 { 00:14:42.641 "name": "BaseBdev1", 00:14:42.641 "uuid": "968c4a79-4805-44ee-80ea-26949bde1ab7", 00:14:42.641 "is_configured": true, 00:14:42.641 "data_offset": 0, 00:14:42.641 "data_size": 65536 00:14:42.641 }, 00:14:42.641 { 00:14:42.641 "name": null, 00:14:42.641 "uuid": "3494611b-046d-4576-85ac-6e1b670b7322", 00:14:42.641 "is_configured": 
false, 00:14:42.641 "data_offset": 0, 00:14:42.641 "data_size": 65536 00:14:42.641 }, 00:14:42.642 { 00:14:42.642 "name": "BaseBdev3", 00:14:42.642 "uuid": "99fdbf79-060c-46a4-8ac5-a597a55515e1", 00:14:42.642 "is_configured": true, 00:14:42.642 "data_offset": 0, 00:14:42.642 "data_size": 65536 00:14:42.642 }, 00:14:42.642 { 00:14:42.642 "name": "BaseBdev4", 00:14:42.642 "uuid": "01627dbf-d648-463a-bd5b-51f88a0d99a4", 00:14:42.642 "is_configured": true, 00:14:42.642 "data_offset": 0, 00:14:42.642 "data_size": 65536 00:14:42.642 } 00:14:42.642 ] 00:14:42.642 }' 00:14:42.642 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.642 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.899 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.899 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.899 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.899 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:42.899 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.899 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:42.899 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:42.899 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.899 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.899 [2024-11-20 13:28:24.559606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.157 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.157 "name": "Existed_Raid", 00:14:43.157 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:43.157 "strip_size_kb": 64, 00:14:43.157 "state": "configuring", 00:14:43.157 "raid_level": "raid5f", 00:14:43.157 "superblock": false, 00:14:43.157 "num_base_bdevs": 4, 00:14:43.157 "num_base_bdevs_discovered": 2, 00:14:43.157 "num_base_bdevs_operational": 4, 00:14:43.157 "base_bdevs_list": [ 00:14:43.157 { 00:14:43.157 "name": null, 00:14:43.157 "uuid": "968c4a79-4805-44ee-80ea-26949bde1ab7", 00:14:43.157 "is_configured": false, 00:14:43.157 "data_offset": 0, 00:14:43.157 "data_size": 65536 00:14:43.157 }, 00:14:43.157 { 00:14:43.157 "name": null, 00:14:43.158 "uuid": "3494611b-046d-4576-85ac-6e1b670b7322", 00:14:43.158 "is_configured": false, 00:14:43.158 "data_offset": 0, 00:14:43.158 "data_size": 65536 00:14:43.158 }, 00:14:43.158 { 00:14:43.158 "name": "BaseBdev3", 00:14:43.158 "uuid": "99fdbf79-060c-46a4-8ac5-a597a55515e1", 00:14:43.158 "is_configured": true, 00:14:43.158 "data_offset": 0, 00:14:43.158 "data_size": 65536 00:14:43.158 }, 00:14:43.158 { 00:14:43.158 "name": "BaseBdev4", 00:14:43.158 "uuid": "01627dbf-d648-463a-bd5b-51f88a0d99a4", 00:14:43.158 "is_configured": true, 00:14:43.158 "data_offset": 0, 00:14:43.158 "data_size": 65536 00:14:43.158 } 00:14:43.158 ] 00:14:43.158 }' 00:14:43.158 13:28:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.158 13:28:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.416 [2024-11-20 13:28:25.058233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.416 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.673 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.673 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.673 "name": "Existed_Raid", 00:14:43.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.673 "strip_size_kb": 64, 00:14:43.673 "state": "configuring", 00:14:43.673 "raid_level": "raid5f", 00:14:43.673 "superblock": false, 00:14:43.673 "num_base_bdevs": 4, 00:14:43.673 "num_base_bdevs_discovered": 3, 00:14:43.673 "num_base_bdevs_operational": 4, 00:14:43.673 "base_bdevs_list": [ 00:14:43.673 { 00:14:43.673 "name": null, 00:14:43.673 "uuid": "968c4a79-4805-44ee-80ea-26949bde1ab7", 00:14:43.673 "is_configured": false, 00:14:43.673 "data_offset": 0, 00:14:43.673 "data_size": 65536 00:14:43.673 }, 00:14:43.673 { 00:14:43.673 "name": "BaseBdev2", 00:14:43.673 "uuid": "3494611b-046d-4576-85ac-6e1b670b7322", 00:14:43.673 "is_configured": true, 00:14:43.673 "data_offset": 0, 00:14:43.673 "data_size": 65536 00:14:43.673 }, 00:14:43.673 { 00:14:43.673 "name": "BaseBdev3", 00:14:43.673 "uuid": "99fdbf79-060c-46a4-8ac5-a597a55515e1", 00:14:43.673 "is_configured": true, 00:14:43.673 "data_offset": 0, 00:14:43.673 "data_size": 65536 00:14:43.673 }, 00:14:43.673 { 00:14:43.673 "name": "BaseBdev4", 00:14:43.673 "uuid": "01627dbf-d648-463a-bd5b-51f88a0d99a4", 00:14:43.673 "is_configured": true, 00:14:43.673 "data_offset": 0, 00:14:43.673 "data_size": 65536 00:14:43.674 } 00:14:43.674 ] 00:14:43.674 }' 00:14:43.674 13:28:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.674 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.931 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.931 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:43.931 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.931 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.931 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 968c4a79-4805-44ee-80ea-26949bde1ab7 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.190 [2024-11-20 13:28:25.664987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:44.190 [2024-11-20 
13:28:25.665071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:44.190 [2024-11-20 13:28:25.665082] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:44.190 [2024-11-20 13:28:25.665395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:14:44.190 [2024-11-20 13:28:25.665989] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:44.190 [2024-11-20 13:28:25.666038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:14:44.190 [2024-11-20 13:28:25.666280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.190 NewBaseBdev 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.190 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.191 [ 00:14:44.191 { 00:14:44.191 "name": "NewBaseBdev", 00:14:44.191 "aliases": [ 00:14:44.191 "968c4a79-4805-44ee-80ea-26949bde1ab7" 00:14:44.191 ], 00:14:44.191 "product_name": "Malloc disk", 00:14:44.191 "block_size": 512, 00:14:44.191 "num_blocks": 65536, 00:14:44.191 "uuid": "968c4a79-4805-44ee-80ea-26949bde1ab7", 00:14:44.191 "assigned_rate_limits": { 00:14:44.191 "rw_ios_per_sec": 0, 00:14:44.191 "rw_mbytes_per_sec": 0, 00:14:44.191 "r_mbytes_per_sec": 0, 00:14:44.191 "w_mbytes_per_sec": 0 00:14:44.191 }, 00:14:44.191 "claimed": true, 00:14:44.191 "claim_type": "exclusive_write", 00:14:44.191 "zoned": false, 00:14:44.191 "supported_io_types": { 00:14:44.191 "read": true, 00:14:44.191 "write": true, 00:14:44.191 "unmap": true, 00:14:44.191 "flush": true, 00:14:44.191 "reset": true, 00:14:44.191 "nvme_admin": false, 00:14:44.191 "nvme_io": false, 00:14:44.191 "nvme_io_md": false, 00:14:44.191 "write_zeroes": true, 00:14:44.191 "zcopy": true, 00:14:44.191 "get_zone_info": false, 00:14:44.191 "zone_management": false, 00:14:44.191 "zone_append": false, 00:14:44.191 "compare": false, 00:14:44.191 "compare_and_write": false, 00:14:44.191 "abort": true, 00:14:44.191 "seek_hole": false, 00:14:44.191 "seek_data": false, 00:14:44.191 "copy": true, 00:14:44.191 "nvme_iov_md": false 00:14:44.191 }, 00:14:44.191 "memory_domains": [ 00:14:44.191 { 00:14:44.191 "dma_device_id": "system", 00:14:44.191 "dma_device_type": 1 00:14:44.191 }, 00:14:44.191 { 00:14:44.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.191 "dma_device_type": 2 00:14:44.191 } 
00:14:44.191 ], 00:14:44.191 "driver_specific": {} 00:14:44.191 } 00:14:44.191 ] 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.191 "name": "Existed_Raid", 00:14:44.191 "uuid": "526238d9-6bb9-48f6-9752-d5eb1f8fac95", 00:14:44.191 "strip_size_kb": 64, 00:14:44.191 "state": "online", 00:14:44.191 "raid_level": "raid5f", 00:14:44.191 "superblock": false, 00:14:44.191 "num_base_bdevs": 4, 00:14:44.191 "num_base_bdevs_discovered": 4, 00:14:44.191 "num_base_bdevs_operational": 4, 00:14:44.191 "base_bdevs_list": [ 00:14:44.191 { 00:14:44.191 "name": "NewBaseBdev", 00:14:44.191 "uuid": "968c4a79-4805-44ee-80ea-26949bde1ab7", 00:14:44.191 "is_configured": true, 00:14:44.191 "data_offset": 0, 00:14:44.191 "data_size": 65536 00:14:44.191 }, 00:14:44.191 { 00:14:44.191 "name": "BaseBdev2", 00:14:44.191 "uuid": "3494611b-046d-4576-85ac-6e1b670b7322", 00:14:44.191 "is_configured": true, 00:14:44.191 "data_offset": 0, 00:14:44.191 "data_size": 65536 00:14:44.191 }, 00:14:44.191 { 00:14:44.191 "name": "BaseBdev3", 00:14:44.191 "uuid": "99fdbf79-060c-46a4-8ac5-a597a55515e1", 00:14:44.191 "is_configured": true, 00:14:44.191 "data_offset": 0, 00:14:44.191 "data_size": 65536 00:14:44.191 }, 00:14:44.191 { 00:14:44.191 "name": "BaseBdev4", 00:14:44.191 "uuid": "01627dbf-d648-463a-bd5b-51f88a0d99a4", 00:14:44.191 "is_configured": true, 00:14:44.191 "data_offset": 0, 00:14:44.191 "data_size": 65536 00:14:44.191 } 00:14:44.191 ] 00:14:44.191 }' 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.191 13:28:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:44.757 [2024-11-20 13:28:26.196446] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:44.757 "name": "Existed_Raid", 00:14:44.757 "aliases": [ 00:14:44.757 "526238d9-6bb9-48f6-9752-d5eb1f8fac95" 00:14:44.757 ], 00:14:44.757 "product_name": "Raid Volume", 00:14:44.757 "block_size": 512, 00:14:44.757 "num_blocks": 196608, 00:14:44.757 "uuid": "526238d9-6bb9-48f6-9752-d5eb1f8fac95", 00:14:44.757 "assigned_rate_limits": { 00:14:44.757 "rw_ios_per_sec": 0, 00:14:44.757 "rw_mbytes_per_sec": 0, 00:14:44.757 "r_mbytes_per_sec": 0, 00:14:44.757 "w_mbytes_per_sec": 0 00:14:44.757 }, 00:14:44.757 "claimed": false, 00:14:44.757 "zoned": false, 00:14:44.757 "supported_io_types": { 00:14:44.757 "read": true, 00:14:44.757 "write": true, 00:14:44.757 "unmap": false, 00:14:44.757 "flush": false, 00:14:44.757 "reset": true, 00:14:44.757 "nvme_admin": false, 00:14:44.757 "nvme_io": false, 00:14:44.757 "nvme_io_md": 
false, 00:14:44.757 "write_zeroes": true, 00:14:44.757 "zcopy": false, 00:14:44.757 "get_zone_info": false, 00:14:44.757 "zone_management": false, 00:14:44.757 "zone_append": false, 00:14:44.757 "compare": false, 00:14:44.757 "compare_and_write": false, 00:14:44.757 "abort": false, 00:14:44.757 "seek_hole": false, 00:14:44.757 "seek_data": false, 00:14:44.757 "copy": false, 00:14:44.757 "nvme_iov_md": false 00:14:44.757 }, 00:14:44.757 "driver_specific": { 00:14:44.757 "raid": { 00:14:44.757 "uuid": "526238d9-6bb9-48f6-9752-d5eb1f8fac95", 00:14:44.757 "strip_size_kb": 64, 00:14:44.757 "state": "online", 00:14:44.757 "raid_level": "raid5f", 00:14:44.757 "superblock": false, 00:14:44.757 "num_base_bdevs": 4, 00:14:44.757 "num_base_bdevs_discovered": 4, 00:14:44.757 "num_base_bdevs_operational": 4, 00:14:44.757 "base_bdevs_list": [ 00:14:44.757 { 00:14:44.757 "name": "NewBaseBdev", 00:14:44.757 "uuid": "968c4a79-4805-44ee-80ea-26949bde1ab7", 00:14:44.757 "is_configured": true, 00:14:44.757 "data_offset": 0, 00:14:44.757 "data_size": 65536 00:14:44.757 }, 00:14:44.757 { 00:14:44.757 "name": "BaseBdev2", 00:14:44.757 "uuid": "3494611b-046d-4576-85ac-6e1b670b7322", 00:14:44.757 "is_configured": true, 00:14:44.757 "data_offset": 0, 00:14:44.757 "data_size": 65536 00:14:44.757 }, 00:14:44.757 { 00:14:44.757 "name": "BaseBdev3", 00:14:44.757 "uuid": "99fdbf79-060c-46a4-8ac5-a597a55515e1", 00:14:44.757 "is_configured": true, 00:14:44.757 "data_offset": 0, 00:14:44.757 "data_size": 65536 00:14:44.757 }, 00:14:44.757 { 00:14:44.757 "name": "BaseBdev4", 00:14:44.757 "uuid": "01627dbf-d648-463a-bd5b-51f88a0d99a4", 00:14:44.757 "is_configured": true, 00:14:44.757 "data_offset": 0, 00:14:44.757 "data_size": 65536 00:14:44.757 } 00:14:44.757 ] 00:14:44.757 } 00:14:44.757 } 00:14:44.757 }' 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:44.757 13:28:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:44.757 BaseBdev2 00:14:44.757 BaseBdev3 00:14:44.757 BaseBdev4' 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.757 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.016 13:28:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.016 [2024-11-20 13:28:26.499745] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.016 [2024-11-20 13:28:26.499801] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.016 [2024-11-20 13:28:26.499920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.016 [2024-11-20 13:28:26.500264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.016 [2024-11-20 13:28:26.500287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93007 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 93007 ']' 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 93007 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93007 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93007' 00:14:45.016 killing process with pid 93007 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 93007 00:14:45.016 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 93007 00:14:45.016 [2024-11-20 13:28:26.547945] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:45.016 [2024-11-20 13:28:26.592631] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:45.274 00:14:45.274 real 0m9.305s 00:14:45.274 user 0m16.353s 00:14:45.274 sys 0m1.865s 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.274 ************************************ 00:14:45.274 END TEST raid5f_state_function_test 00:14:45.274 ************************************ 00:14:45.274 13:28:26 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:14:45.274 13:28:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:45.274 13:28:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.274 13:28:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:45.274 ************************************ 00:14:45.274 START TEST 
raid5f_state_function_test_sb 00:14:45.274 ************************************ 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:45.274 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:45.275 
13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:45.275 Process raid pid: 93649 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=93649 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93649' 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 93649 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 93649 ']' 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.275 13:28:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.533 [2024-11-20 13:28:26.971171] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:14:45.533 [2024-11-20 13:28:26.971316] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.533 [2024-11-20 13:28:27.132874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.533 [2024-11-20 13:28:27.167584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.792 [2024-11-20 13:28:27.215040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.792 [2024-11-20 13:28:27.215096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.360 [2024-11-20 13:28:27.895700] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:46.360 [2024-11-20 13:28:27.895780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:46.360 [2024-11-20 13:28:27.895804] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:46.360 [2024-11-20 13:28:27.895824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:46.360 [2024-11-20 13:28:27.895834] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:14:46.360 [2024-11-20 13:28:27.895851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:46.360 [2024-11-20 13:28:27.895861] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:46.360 [2024-11-20 13:28:27.895875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.360 "name": "Existed_Raid", 00:14:46.360 "uuid": "4358dc40-919f-4364-86b3-1d8a10d2e690", 00:14:46.360 "strip_size_kb": 64, 00:14:46.360 "state": "configuring", 00:14:46.360 "raid_level": "raid5f", 00:14:46.360 "superblock": true, 00:14:46.360 "num_base_bdevs": 4, 00:14:46.360 "num_base_bdevs_discovered": 0, 00:14:46.360 "num_base_bdevs_operational": 4, 00:14:46.360 "base_bdevs_list": [ 00:14:46.360 { 00:14:46.360 "name": "BaseBdev1", 00:14:46.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.360 "is_configured": false, 00:14:46.360 "data_offset": 0, 00:14:46.360 "data_size": 0 00:14:46.360 }, 00:14:46.360 { 00:14:46.360 "name": "BaseBdev2", 00:14:46.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.360 "is_configured": false, 00:14:46.360 "data_offset": 0, 00:14:46.360 "data_size": 0 00:14:46.360 }, 00:14:46.360 { 00:14:46.360 "name": "BaseBdev3", 00:14:46.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.360 "is_configured": false, 00:14:46.360 "data_offset": 0, 00:14:46.360 "data_size": 0 00:14:46.360 }, 00:14:46.360 { 00:14:46.360 "name": "BaseBdev4", 00:14:46.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.360 "is_configured": false, 00:14:46.360 "data_offset": 0, 00:14:46.360 "data_size": 0 00:14:46.360 } 00:14:46.360 ] 00:14:46.360 }' 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.360 13:28:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:46.928 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.929 [2024-11-20 13:28:28.374821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:46.929 [2024-11-20 13:28:28.375018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.929 [2024-11-20 13:28:28.386836] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:46.929 [2024-11-20 13:28:28.387041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:46.929 [2024-11-20 13:28:28.387097] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:46.929 [2024-11-20 13:28:28.387148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:46.929 [2024-11-20 13:28:28.387187] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:46.929 [2024-11-20 13:28:28.387233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:46.929 [2024-11-20 13:28:28.387271] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:46.929 [2024-11-20 13:28:28.387348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.929 [2024-11-20 13:28:28.409278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.929 BaseBdev1 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.929 [ 00:14:46.929 { 00:14:46.929 "name": "BaseBdev1", 00:14:46.929 "aliases": [ 00:14:46.929 "1b5ea438-166c-47b9-9247-6d1cf3adbbee" 00:14:46.929 ], 00:14:46.929 "product_name": "Malloc disk", 00:14:46.929 "block_size": 512, 00:14:46.929 "num_blocks": 65536, 00:14:46.929 "uuid": "1b5ea438-166c-47b9-9247-6d1cf3adbbee", 00:14:46.929 "assigned_rate_limits": { 00:14:46.929 "rw_ios_per_sec": 0, 00:14:46.929 "rw_mbytes_per_sec": 0, 00:14:46.929 "r_mbytes_per_sec": 0, 00:14:46.929 "w_mbytes_per_sec": 0 00:14:46.929 }, 00:14:46.929 "claimed": true, 00:14:46.929 "claim_type": "exclusive_write", 00:14:46.929 "zoned": false, 00:14:46.929 "supported_io_types": { 00:14:46.929 "read": true, 00:14:46.929 "write": true, 00:14:46.929 "unmap": true, 00:14:46.929 "flush": true, 00:14:46.929 "reset": true, 00:14:46.929 "nvme_admin": false, 00:14:46.929 "nvme_io": false, 00:14:46.929 "nvme_io_md": false, 00:14:46.929 "write_zeroes": true, 00:14:46.929 "zcopy": true, 00:14:46.929 "get_zone_info": false, 00:14:46.929 "zone_management": false, 00:14:46.929 "zone_append": false, 00:14:46.929 "compare": false, 00:14:46.929 "compare_and_write": false, 00:14:46.929 "abort": true, 00:14:46.929 "seek_hole": false, 00:14:46.929 "seek_data": false, 00:14:46.929 "copy": true, 00:14:46.929 "nvme_iov_md": false 00:14:46.929 }, 00:14:46.929 "memory_domains": [ 00:14:46.929 { 00:14:46.929 "dma_device_id": "system", 00:14:46.929 "dma_device_type": 1 00:14:46.929 }, 00:14:46.929 { 00:14:46.929 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:46.929 "dma_device_type": 2 00:14:46.929 } 00:14:46.929 ], 00:14:46.929 "driver_specific": {} 00:14:46.929 } 00:14:46.929 ] 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.929 13:28:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.929 "name": "Existed_Raid", 00:14:46.929 "uuid": "6157dfcc-4051-4f16-9487-70bf1d485137", 00:14:46.929 "strip_size_kb": 64, 00:14:46.929 "state": "configuring", 00:14:46.929 "raid_level": "raid5f", 00:14:46.929 "superblock": true, 00:14:46.929 "num_base_bdevs": 4, 00:14:46.929 "num_base_bdevs_discovered": 1, 00:14:46.929 "num_base_bdevs_operational": 4, 00:14:46.929 "base_bdevs_list": [ 00:14:46.929 { 00:14:46.929 "name": "BaseBdev1", 00:14:46.929 "uuid": "1b5ea438-166c-47b9-9247-6d1cf3adbbee", 00:14:46.929 "is_configured": true, 00:14:46.929 "data_offset": 2048, 00:14:46.929 "data_size": 63488 00:14:46.929 }, 00:14:46.929 { 00:14:46.929 "name": "BaseBdev2", 00:14:46.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.929 "is_configured": false, 00:14:46.929 "data_offset": 0, 00:14:46.929 "data_size": 0 00:14:46.929 }, 00:14:46.929 { 00:14:46.929 "name": "BaseBdev3", 00:14:46.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.929 "is_configured": false, 00:14:46.929 "data_offset": 0, 00:14:46.929 "data_size": 0 00:14:46.929 }, 00:14:46.929 { 00:14:46.929 "name": "BaseBdev4", 00:14:46.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.929 "is_configured": false, 00:14:46.929 "data_offset": 0, 00:14:46.929 "data_size": 0 00:14:46.929 } 00:14:46.929 ] 00:14:46.929 }' 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.929 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:47.499 13:28:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.499 [2024-11-20 13:28:28.920920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:47.499 [2024-11-20 13:28:28.921150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.499 [2024-11-20 13:28:28.932983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.499 [2024-11-20 13:28:28.935382] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:47.499 [2024-11-20 13:28:28.935539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.499 [2024-11-20 13:28:28.935584] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:47.499 [2024-11-20 13:28:28.935619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:47.499 [2024-11-20 13:28:28.935677] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:47.499 [2024-11-20 13:28:28.935710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.499 13:28:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.499 "name": "Existed_Raid", 00:14:47.499 "uuid": "5f70210e-8af7-4acf-86fa-13759f0a372e", 00:14:47.499 "strip_size_kb": 64, 00:14:47.499 "state": "configuring", 00:14:47.499 "raid_level": "raid5f", 00:14:47.499 "superblock": true, 00:14:47.499 "num_base_bdevs": 4, 00:14:47.499 "num_base_bdevs_discovered": 1, 00:14:47.499 "num_base_bdevs_operational": 4, 00:14:47.499 "base_bdevs_list": [ 00:14:47.499 { 00:14:47.499 "name": "BaseBdev1", 00:14:47.499 "uuid": "1b5ea438-166c-47b9-9247-6d1cf3adbbee", 00:14:47.499 "is_configured": true, 00:14:47.499 "data_offset": 2048, 00:14:47.499 "data_size": 63488 00:14:47.499 }, 00:14:47.499 { 00:14:47.499 "name": "BaseBdev2", 00:14:47.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.499 "is_configured": false, 00:14:47.499 "data_offset": 0, 00:14:47.499 "data_size": 0 00:14:47.499 }, 00:14:47.499 { 00:14:47.499 "name": "BaseBdev3", 00:14:47.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.499 "is_configured": false, 00:14:47.499 "data_offset": 0, 00:14:47.499 "data_size": 0 00:14:47.499 }, 00:14:47.499 { 00:14:47.499 "name": "BaseBdev4", 00:14:47.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.499 "is_configured": false, 00:14:47.499 "data_offset": 0, 00:14:47.499 "data_size": 0 00:14:47.499 } 00:14:47.499 ] 00:14:47.499 }' 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.499 13:28:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.758 [2024-11-20 13:28:29.347819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:47.758 BaseBdev2 00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.758 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.758 [ 00:14:47.758 { 00:14:47.758 "name": "BaseBdev2", 00:14:47.758 "aliases": [ 00:14:47.758 
"68d955e1-a3e0-4e1a-acbb-0a5f849be8b1" 00:14:47.758 ], 00:14:47.758 "product_name": "Malloc disk", 00:14:47.758 "block_size": 512, 00:14:47.758 "num_blocks": 65536, 00:14:47.758 "uuid": "68d955e1-a3e0-4e1a-acbb-0a5f849be8b1", 00:14:47.758 "assigned_rate_limits": { 00:14:47.758 "rw_ios_per_sec": 0, 00:14:47.758 "rw_mbytes_per_sec": 0, 00:14:47.758 "r_mbytes_per_sec": 0, 00:14:47.758 "w_mbytes_per_sec": 0 00:14:47.758 }, 00:14:47.758 "claimed": true, 00:14:47.758 "claim_type": "exclusive_write", 00:14:47.758 "zoned": false, 00:14:47.758 "supported_io_types": { 00:14:47.758 "read": true, 00:14:47.758 "write": true, 00:14:47.758 "unmap": true, 00:14:47.758 "flush": true, 00:14:47.758 "reset": true, 00:14:47.758 "nvme_admin": false, 00:14:47.758 "nvme_io": false, 00:14:47.758 "nvme_io_md": false, 00:14:47.758 "write_zeroes": true, 00:14:47.758 "zcopy": true, 00:14:47.758 "get_zone_info": false, 00:14:47.758 "zone_management": false, 00:14:47.758 "zone_append": false, 00:14:47.758 "compare": false, 00:14:47.758 "compare_and_write": false, 00:14:47.758 "abort": true, 00:14:47.758 "seek_hole": false, 00:14:47.758 "seek_data": false, 00:14:47.758 "copy": true, 00:14:47.758 "nvme_iov_md": false 00:14:47.758 }, 00:14:47.758 "memory_domains": [ 00:14:47.758 { 00:14:47.758 "dma_device_id": "system", 00:14:47.758 "dma_device_type": 1 00:14:47.758 }, 00:14:47.758 { 00:14:47.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.758 "dma_device_type": 2 00:14:47.758 } 00:14:47.758 ], 00:14:47.758 "driver_specific": {} 00:14:47.758 } 00:14:47.758 ] 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.759 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.018 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.018 "name": "Existed_Raid", 00:14:48.018 "uuid": 
"5f70210e-8af7-4acf-86fa-13759f0a372e", 00:14:48.018 "strip_size_kb": 64, 00:14:48.018 "state": "configuring", 00:14:48.018 "raid_level": "raid5f", 00:14:48.018 "superblock": true, 00:14:48.018 "num_base_bdevs": 4, 00:14:48.018 "num_base_bdevs_discovered": 2, 00:14:48.018 "num_base_bdevs_operational": 4, 00:14:48.018 "base_bdevs_list": [ 00:14:48.018 { 00:14:48.018 "name": "BaseBdev1", 00:14:48.018 "uuid": "1b5ea438-166c-47b9-9247-6d1cf3adbbee", 00:14:48.018 "is_configured": true, 00:14:48.018 "data_offset": 2048, 00:14:48.018 "data_size": 63488 00:14:48.018 }, 00:14:48.018 { 00:14:48.018 "name": "BaseBdev2", 00:14:48.018 "uuid": "68d955e1-a3e0-4e1a-acbb-0a5f849be8b1", 00:14:48.018 "is_configured": true, 00:14:48.018 "data_offset": 2048, 00:14:48.018 "data_size": 63488 00:14:48.018 }, 00:14:48.018 { 00:14:48.018 "name": "BaseBdev3", 00:14:48.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.018 "is_configured": false, 00:14:48.018 "data_offset": 0, 00:14:48.018 "data_size": 0 00:14:48.018 }, 00:14:48.018 { 00:14:48.018 "name": "BaseBdev4", 00:14:48.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.018 "is_configured": false, 00:14:48.018 "data_offset": 0, 00:14:48.018 "data_size": 0 00:14:48.018 } 00:14:48.018 ] 00:14:48.018 }' 00:14:48.018 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.018 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.278 [2024-11-20 13:28:29.857639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:48.278 BaseBdev3 
00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.278 [ 00:14:48.278 { 00:14:48.278 "name": "BaseBdev3", 00:14:48.278 "aliases": [ 00:14:48.278 "df2c05d0-7267-4ac9-a3a6-43a40b8a5cf5" 00:14:48.278 ], 00:14:48.278 "product_name": "Malloc disk", 00:14:48.278 "block_size": 512, 00:14:48.278 "num_blocks": 65536, 00:14:48.278 "uuid": "df2c05d0-7267-4ac9-a3a6-43a40b8a5cf5", 00:14:48.278 
"assigned_rate_limits": { 00:14:48.278 "rw_ios_per_sec": 0, 00:14:48.278 "rw_mbytes_per_sec": 0, 00:14:48.278 "r_mbytes_per_sec": 0, 00:14:48.278 "w_mbytes_per_sec": 0 00:14:48.278 }, 00:14:48.278 "claimed": true, 00:14:48.278 "claim_type": "exclusive_write", 00:14:48.278 "zoned": false, 00:14:48.278 "supported_io_types": { 00:14:48.278 "read": true, 00:14:48.278 "write": true, 00:14:48.278 "unmap": true, 00:14:48.278 "flush": true, 00:14:48.278 "reset": true, 00:14:48.278 "nvme_admin": false, 00:14:48.278 "nvme_io": false, 00:14:48.278 "nvme_io_md": false, 00:14:48.278 "write_zeroes": true, 00:14:48.278 "zcopy": true, 00:14:48.278 "get_zone_info": false, 00:14:48.278 "zone_management": false, 00:14:48.278 "zone_append": false, 00:14:48.278 "compare": false, 00:14:48.278 "compare_and_write": false, 00:14:48.278 "abort": true, 00:14:48.278 "seek_hole": false, 00:14:48.278 "seek_data": false, 00:14:48.278 "copy": true, 00:14:48.278 "nvme_iov_md": false 00:14:48.278 }, 00:14:48.278 "memory_domains": [ 00:14:48.278 { 00:14:48.278 "dma_device_id": "system", 00:14:48.278 "dma_device_type": 1 00:14:48.278 }, 00:14:48.278 { 00:14:48.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.278 "dma_device_type": 2 00:14:48.278 } 00:14:48.278 ], 00:14:48.278 "driver_specific": {} 00:14:48.278 } 00:14:48.278 ] 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.278 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.538 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.538 "name": "Existed_Raid", 00:14:48.538 "uuid": "5f70210e-8af7-4acf-86fa-13759f0a372e", 00:14:48.538 "strip_size_kb": 64, 00:14:48.538 "state": "configuring", 00:14:48.538 "raid_level": "raid5f", 00:14:48.538 "superblock": true, 00:14:48.538 "num_base_bdevs": 4, 00:14:48.538 "num_base_bdevs_discovered": 3, 
00:14:48.538 "num_base_bdevs_operational": 4, 00:14:48.538 "base_bdevs_list": [ 00:14:48.538 { 00:14:48.538 "name": "BaseBdev1", 00:14:48.538 "uuid": "1b5ea438-166c-47b9-9247-6d1cf3adbbee", 00:14:48.538 "is_configured": true, 00:14:48.538 "data_offset": 2048, 00:14:48.538 "data_size": 63488 00:14:48.538 }, 00:14:48.538 { 00:14:48.538 "name": "BaseBdev2", 00:14:48.538 "uuid": "68d955e1-a3e0-4e1a-acbb-0a5f849be8b1", 00:14:48.538 "is_configured": true, 00:14:48.538 "data_offset": 2048, 00:14:48.538 "data_size": 63488 00:14:48.538 }, 00:14:48.538 { 00:14:48.538 "name": "BaseBdev3", 00:14:48.538 "uuid": "df2c05d0-7267-4ac9-a3a6-43a40b8a5cf5", 00:14:48.538 "is_configured": true, 00:14:48.538 "data_offset": 2048, 00:14:48.538 "data_size": 63488 00:14:48.538 }, 00:14:48.538 { 00:14:48.538 "name": "BaseBdev4", 00:14:48.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.538 "is_configured": false, 00:14:48.538 "data_offset": 0, 00:14:48.538 "data_size": 0 00:14:48.538 } 00:14:48.538 ] 00:14:48.538 }' 00:14:48.538 13:28:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.538 13:28:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.798 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:48.798 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.799 [2024-11-20 13:28:30.364754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:48.799 [2024-11-20 13:28:30.365211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:48.799 [2024-11-20 13:28:30.365292] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:48.799 BaseBdev4 
00:14:48.799 [2024-11-20 13:28:30.365698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:48.799 [2024-11-20 13:28:30.366368] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:48.799 [2024-11-20 13:28:30.366398] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.799 [2024-11-20 13:28:30.366571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:48.799 13:28:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.799 [ 00:14:48.799 { 00:14:48.799 "name": "BaseBdev4", 00:14:48.799 "aliases": [ 00:14:48.799 "72cc8dc5-c38a-4aca-9bd6-ddcbc1ada0f2" 00:14:48.799 ], 00:14:48.799 "product_name": "Malloc disk", 00:14:48.799 "block_size": 512, 00:14:48.799 "num_blocks": 65536, 00:14:48.799 "uuid": "72cc8dc5-c38a-4aca-9bd6-ddcbc1ada0f2", 00:14:48.799 "assigned_rate_limits": { 00:14:48.799 "rw_ios_per_sec": 0, 00:14:48.799 "rw_mbytes_per_sec": 0, 00:14:48.799 "r_mbytes_per_sec": 0, 00:14:48.799 "w_mbytes_per_sec": 0 00:14:48.799 }, 00:14:48.799 "claimed": true, 00:14:48.799 "claim_type": "exclusive_write", 00:14:48.799 "zoned": false, 00:14:48.799 "supported_io_types": { 00:14:48.799 "read": true, 00:14:48.799 "write": true, 00:14:48.799 "unmap": true, 00:14:48.799 "flush": true, 00:14:48.799 "reset": true, 00:14:48.799 "nvme_admin": false, 00:14:48.799 "nvme_io": false, 00:14:48.799 "nvme_io_md": false, 00:14:48.799 "write_zeroes": true, 00:14:48.799 "zcopy": true, 00:14:48.799 "get_zone_info": false, 00:14:48.799 "zone_management": false, 00:14:48.799 "zone_append": false, 00:14:48.799 "compare": false, 00:14:48.799 "compare_and_write": false, 00:14:48.799 "abort": true, 00:14:48.799 "seek_hole": false, 00:14:48.799 "seek_data": false, 00:14:48.799 "copy": true, 00:14:48.799 "nvme_iov_md": false 00:14:48.799 }, 00:14:48.799 "memory_domains": [ 00:14:48.799 { 00:14:48.799 "dma_device_id": "system", 00:14:48.799 "dma_device_type": 1 00:14:48.799 }, 00:14:48.799 { 00:14:48.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.799 "dma_device_type": 2 00:14:48.799 } 00:14:48.799 ], 00:14:48.799 "driver_specific": {} 00:14:48.799 } 00:14:48.799 ] 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.799 13:28:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.799 "name": "Existed_Raid", 00:14:48.799 "uuid": "5f70210e-8af7-4acf-86fa-13759f0a372e", 00:14:48.799 "strip_size_kb": 64, 00:14:48.799 "state": "online", 00:14:48.799 "raid_level": "raid5f", 00:14:48.799 "superblock": true, 00:14:48.799 "num_base_bdevs": 4, 00:14:48.799 "num_base_bdevs_discovered": 4, 00:14:48.799 "num_base_bdevs_operational": 4, 00:14:48.799 "base_bdevs_list": [ 00:14:48.799 { 00:14:48.799 "name": "BaseBdev1", 00:14:48.799 "uuid": "1b5ea438-166c-47b9-9247-6d1cf3adbbee", 00:14:48.799 "is_configured": true, 00:14:48.799 "data_offset": 2048, 00:14:48.799 "data_size": 63488 00:14:48.799 }, 00:14:48.799 { 00:14:48.799 "name": "BaseBdev2", 00:14:48.799 "uuid": "68d955e1-a3e0-4e1a-acbb-0a5f849be8b1", 00:14:48.799 "is_configured": true, 00:14:48.799 "data_offset": 2048, 00:14:48.799 "data_size": 63488 00:14:48.799 }, 00:14:48.799 { 00:14:48.799 "name": "BaseBdev3", 00:14:48.799 "uuid": "df2c05d0-7267-4ac9-a3a6-43a40b8a5cf5", 00:14:48.799 "is_configured": true, 00:14:48.799 "data_offset": 2048, 00:14:48.799 "data_size": 63488 00:14:48.799 }, 00:14:48.799 { 00:14:48.799 "name": "BaseBdev4", 00:14:48.799 "uuid": "72cc8dc5-c38a-4aca-9bd6-ddcbc1ada0f2", 00:14:48.799 "is_configured": true, 00:14:48.799 "data_offset": 2048, 00:14:48.799 "data_size": 63488 00:14:48.799 } 00:14:48.799 ] 00:14:48.799 }' 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.799 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.370 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:49.370 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:14:49.370 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:49.370 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:49.370 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:49.370 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:49.370 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:49.370 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:49.370 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.370 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.370 [2024-11-20 13:28:30.872360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.370 13:28:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.370 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:49.370 "name": "Existed_Raid", 00:14:49.370 "aliases": [ 00:14:49.370 "5f70210e-8af7-4acf-86fa-13759f0a372e" 00:14:49.370 ], 00:14:49.370 "product_name": "Raid Volume", 00:14:49.370 "block_size": 512, 00:14:49.370 "num_blocks": 190464, 00:14:49.370 "uuid": "5f70210e-8af7-4acf-86fa-13759f0a372e", 00:14:49.370 "assigned_rate_limits": { 00:14:49.370 "rw_ios_per_sec": 0, 00:14:49.370 "rw_mbytes_per_sec": 0, 00:14:49.370 "r_mbytes_per_sec": 0, 00:14:49.370 "w_mbytes_per_sec": 0 00:14:49.370 }, 00:14:49.370 "claimed": false, 00:14:49.370 "zoned": false, 00:14:49.370 "supported_io_types": { 00:14:49.370 "read": true, 00:14:49.370 "write": true, 00:14:49.370 "unmap": false, 00:14:49.370 "flush": false, 
00:14:49.370 "reset": true, 00:14:49.370 "nvme_admin": false, 00:14:49.370 "nvme_io": false, 00:14:49.370 "nvme_io_md": false, 00:14:49.370 "write_zeroes": true, 00:14:49.370 "zcopy": false, 00:14:49.370 "get_zone_info": false, 00:14:49.370 "zone_management": false, 00:14:49.370 "zone_append": false, 00:14:49.370 "compare": false, 00:14:49.370 "compare_and_write": false, 00:14:49.370 "abort": false, 00:14:49.370 "seek_hole": false, 00:14:49.370 "seek_data": false, 00:14:49.370 "copy": false, 00:14:49.370 "nvme_iov_md": false 00:14:49.370 }, 00:14:49.370 "driver_specific": { 00:14:49.370 "raid": { 00:14:49.370 "uuid": "5f70210e-8af7-4acf-86fa-13759f0a372e", 00:14:49.370 "strip_size_kb": 64, 00:14:49.370 "state": "online", 00:14:49.370 "raid_level": "raid5f", 00:14:49.370 "superblock": true, 00:14:49.370 "num_base_bdevs": 4, 00:14:49.370 "num_base_bdevs_discovered": 4, 00:14:49.370 "num_base_bdevs_operational": 4, 00:14:49.370 "base_bdevs_list": [ 00:14:49.370 { 00:14:49.370 "name": "BaseBdev1", 00:14:49.370 "uuid": "1b5ea438-166c-47b9-9247-6d1cf3adbbee", 00:14:49.370 "is_configured": true, 00:14:49.370 "data_offset": 2048, 00:14:49.370 "data_size": 63488 00:14:49.370 }, 00:14:49.370 { 00:14:49.370 "name": "BaseBdev2", 00:14:49.370 "uuid": "68d955e1-a3e0-4e1a-acbb-0a5f849be8b1", 00:14:49.370 "is_configured": true, 00:14:49.370 "data_offset": 2048, 00:14:49.370 "data_size": 63488 00:14:49.371 }, 00:14:49.371 { 00:14:49.371 "name": "BaseBdev3", 00:14:49.371 "uuid": "df2c05d0-7267-4ac9-a3a6-43a40b8a5cf5", 00:14:49.371 "is_configured": true, 00:14:49.371 "data_offset": 2048, 00:14:49.371 "data_size": 63488 00:14:49.371 }, 00:14:49.371 { 00:14:49.371 "name": "BaseBdev4", 00:14:49.371 "uuid": "72cc8dc5-c38a-4aca-9bd6-ddcbc1ada0f2", 00:14:49.371 "is_configured": true, 00:14:49.371 "data_offset": 2048, 00:14:49.371 "data_size": 63488 00:14:49.371 } 00:14:49.371 ] 00:14:49.371 } 00:14:49.371 } 00:14:49.371 }' 00:14:49.371 13:28:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:49.371 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:49.371 BaseBdev2 00:14:49.371 BaseBdev3 00:14:49.371 BaseBdev4' 00:14:49.371 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.371 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:49.371 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.371 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.371 13:28:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:49.371 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.371 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.371 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.371 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.371 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.371 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.371 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:49.371 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.371 13:28:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:49.371 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.631 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.631 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.631 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.631 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.631 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:49.631 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.631 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.631 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.631 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:49.632 13:28:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.632 [2024-11-20 13:28:31.203766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.632 "name": "Existed_Raid", 00:14:49.632 "uuid": "5f70210e-8af7-4acf-86fa-13759f0a372e", 00:14:49.632 "strip_size_kb": 64, 00:14:49.632 "state": "online", 00:14:49.632 "raid_level": "raid5f", 00:14:49.632 "superblock": true, 00:14:49.632 "num_base_bdevs": 4, 00:14:49.632 "num_base_bdevs_discovered": 3, 00:14:49.632 "num_base_bdevs_operational": 3, 00:14:49.632 "base_bdevs_list": [ 00:14:49.632 { 00:14:49.632 "name": 
null, 00:14:49.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.632 "is_configured": false, 00:14:49.632 "data_offset": 0, 00:14:49.632 "data_size": 63488 00:14:49.632 }, 00:14:49.632 { 00:14:49.632 "name": "BaseBdev2", 00:14:49.632 "uuid": "68d955e1-a3e0-4e1a-acbb-0a5f849be8b1", 00:14:49.632 "is_configured": true, 00:14:49.632 "data_offset": 2048, 00:14:49.632 "data_size": 63488 00:14:49.632 }, 00:14:49.632 { 00:14:49.632 "name": "BaseBdev3", 00:14:49.632 "uuid": "df2c05d0-7267-4ac9-a3a6-43a40b8a5cf5", 00:14:49.632 "is_configured": true, 00:14:49.632 "data_offset": 2048, 00:14:49.632 "data_size": 63488 00:14:49.632 }, 00:14:49.632 { 00:14:49.632 "name": "BaseBdev4", 00:14:49.632 "uuid": "72cc8dc5-c38a-4aca-9bd6-ddcbc1ada0f2", 00:14:49.632 "is_configured": true, 00:14:49.632 "data_offset": 2048, 00:14:49.632 "data_size": 63488 00:14:49.632 } 00:14:49.632 ] 00:14:49.632 }' 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.632 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.202 [2024-11-20 13:28:31.711560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:50.202 [2024-11-20 13:28:31.711883] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.202 [2024-11-20 13:28:31.723943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.202 [2024-11-20 13:28:31.783975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.202 [2024-11-20 
13:28:31.852218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:50.202 [2024-11-20 13:28:31.852402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:50.202 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.487 13:28:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.487 BaseBdev2 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.487 [ 00:14:50.487 { 00:14:50.487 "name": "BaseBdev2", 00:14:50.487 "aliases": [ 00:14:50.487 "f6f2aa53-03d8-476f-ba49-0751df6d005b" 00:14:50.487 ], 00:14:50.487 "product_name": "Malloc disk", 00:14:50.487 "block_size": 512, 00:14:50.487 
"num_blocks": 65536, 00:14:50.487 "uuid": "f6f2aa53-03d8-476f-ba49-0751df6d005b", 00:14:50.487 "assigned_rate_limits": { 00:14:50.487 "rw_ios_per_sec": 0, 00:14:50.487 "rw_mbytes_per_sec": 0, 00:14:50.487 "r_mbytes_per_sec": 0, 00:14:50.487 "w_mbytes_per_sec": 0 00:14:50.487 }, 00:14:50.487 "claimed": false, 00:14:50.487 "zoned": false, 00:14:50.487 "supported_io_types": { 00:14:50.487 "read": true, 00:14:50.487 "write": true, 00:14:50.487 "unmap": true, 00:14:50.487 "flush": true, 00:14:50.487 "reset": true, 00:14:50.487 "nvme_admin": false, 00:14:50.487 "nvme_io": false, 00:14:50.487 "nvme_io_md": false, 00:14:50.487 "write_zeroes": true, 00:14:50.487 "zcopy": true, 00:14:50.487 "get_zone_info": false, 00:14:50.487 "zone_management": false, 00:14:50.487 "zone_append": false, 00:14:50.487 "compare": false, 00:14:50.487 "compare_and_write": false, 00:14:50.487 "abort": true, 00:14:50.487 "seek_hole": false, 00:14:50.487 "seek_data": false, 00:14:50.487 "copy": true, 00:14:50.487 "nvme_iov_md": false 00:14:50.487 }, 00:14:50.487 "memory_domains": [ 00:14:50.487 { 00:14:50.487 "dma_device_id": "system", 00:14:50.487 "dma_device_type": 1 00:14:50.487 }, 00:14:50.487 { 00:14:50.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.487 "dma_device_type": 2 00:14:50.487 } 00:14:50.487 ], 00:14:50.487 "driver_specific": {} 00:14:50.487 } 00:14:50.487 ] 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:50.487 13:28:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.487 BaseBdev3 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.487 13:28:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.488 [ 00:14:50.488 { 00:14:50.488 "name": "BaseBdev3", 00:14:50.488 "aliases": [ 00:14:50.488 
"9ad6b85b-7abf-43b2-afd8-c9d8cef5e2df" 00:14:50.488 ], 00:14:50.488 "product_name": "Malloc disk", 00:14:50.488 "block_size": 512, 00:14:50.488 "num_blocks": 65536, 00:14:50.488 "uuid": "9ad6b85b-7abf-43b2-afd8-c9d8cef5e2df", 00:14:50.488 "assigned_rate_limits": { 00:14:50.488 "rw_ios_per_sec": 0, 00:14:50.488 "rw_mbytes_per_sec": 0, 00:14:50.488 "r_mbytes_per_sec": 0, 00:14:50.488 "w_mbytes_per_sec": 0 00:14:50.488 }, 00:14:50.488 "claimed": false, 00:14:50.488 "zoned": false, 00:14:50.488 "supported_io_types": { 00:14:50.488 "read": true, 00:14:50.488 "write": true, 00:14:50.488 "unmap": true, 00:14:50.488 "flush": true, 00:14:50.488 "reset": true, 00:14:50.488 "nvme_admin": false, 00:14:50.488 "nvme_io": false, 00:14:50.488 "nvme_io_md": false, 00:14:50.488 "write_zeroes": true, 00:14:50.488 "zcopy": true, 00:14:50.488 "get_zone_info": false, 00:14:50.488 "zone_management": false, 00:14:50.488 "zone_append": false, 00:14:50.488 "compare": false, 00:14:50.488 "compare_and_write": false, 00:14:50.488 "abort": true, 00:14:50.488 "seek_hole": false, 00:14:50.488 "seek_data": false, 00:14:50.488 "copy": true, 00:14:50.488 "nvme_iov_md": false 00:14:50.488 }, 00:14:50.488 "memory_domains": [ 00:14:50.488 { 00:14:50.488 "dma_device_id": "system", 00:14:50.488 "dma_device_type": 1 00:14:50.488 }, 00:14:50.488 { 00:14:50.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.488 "dma_device_type": 2 00:14:50.488 } 00:14:50.488 ], 00:14:50.488 "driver_specific": {} 00:14:50.488 } 00:14:50.488 ] 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:50.488 13:28:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.488 BaseBdev4 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:50.488 [ 00:14:50.488 { 00:14:50.488 "name": "BaseBdev4", 00:14:50.488 "aliases": [ 00:14:50.488 "7450adf6-748b-4984-aa64-eefed5134028" 00:14:50.488 ], 00:14:50.488 "product_name": "Malloc disk", 00:14:50.488 "block_size": 512, 00:14:50.488 "num_blocks": 65536, 00:14:50.488 "uuid": "7450adf6-748b-4984-aa64-eefed5134028", 00:14:50.488 "assigned_rate_limits": { 00:14:50.488 "rw_ios_per_sec": 0, 00:14:50.488 "rw_mbytes_per_sec": 0, 00:14:50.488 "r_mbytes_per_sec": 0, 00:14:50.488 "w_mbytes_per_sec": 0 00:14:50.488 }, 00:14:50.488 "claimed": false, 00:14:50.488 "zoned": false, 00:14:50.488 "supported_io_types": { 00:14:50.488 "read": true, 00:14:50.488 "write": true, 00:14:50.488 "unmap": true, 00:14:50.488 "flush": true, 00:14:50.488 "reset": true, 00:14:50.488 "nvme_admin": false, 00:14:50.488 "nvme_io": false, 00:14:50.488 "nvme_io_md": false, 00:14:50.488 "write_zeroes": true, 00:14:50.488 "zcopy": true, 00:14:50.488 "get_zone_info": false, 00:14:50.488 "zone_management": false, 00:14:50.488 "zone_append": false, 00:14:50.488 "compare": false, 00:14:50.488 "compare_and_write": false, 00:14:50.488 "abort": true, 00:14:50.488 "seek_hole": false, 00:14:50.488 "seek_data": false, 00:14:50.488 "copy": true, 00:14:50.488 "nvme_iov_md": false 00:14:50.488 }, 00:14:50.488 "memory_domains": [ 00:14:50.488 { 00:14:50.488 "dma_device_id": "system", 00:14:50.488 "dma_device_type": 1 00:14:50.488 }, 00:14:50.488 { 00:14:50.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.488 "dma_device_type": 2 00:14:50.488 } 00:14:50.488 ], 00:14:50.488 "driver_specific": {} 00:14:50.488 } 00:14:50.488 ] 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:50.488 13:28:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.488 [2024-11-20 13:28:32.086648] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.488 [2024-11-20 13:28:32.086825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.488 [2024-11-20 13:28:32.086902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:50.488 [2024-11-20 13:28:32.089172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.488 [2024-11-20 13:28:32.089309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.488 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.489 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.489 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.489 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.489 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.748 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.748 "name": "Existed_Raid", 00:14:50.748 "uuid": "831d854d-77cf-429c-a0b5-eafb1cbb5de5", 00:14:50.748 "strip_size_kb": 64, 00:14:50.748 "state": "configuring", 00:14:50.748 "raid_level": "raid5f", 00:14:50.748 "superblock": true, 00:14:50.748 "num_base_bdevs": 4, 00:14:50.748 "num_base_bdevs_discovered": 3, 00:14:50.748 "num_base_bdevs_operational": 4, 00:14:50.748 "base_bdevs_list": [ 00:14:50.748 { 00:14:50.748 "name": "BaseBdev1", 00:14:50.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.748 "is_configured": false, 00:14:50.748 "data_offset": 0, 00:14:50.748 "data_size": 0 00:14:50.748 }, 00:14:50.748 { 00:14:50.748 "name": "BaseBdev2", 00:14:50.748 "uuid": "f6f2aa53-03d8-476f-ba49-0751df6d005b", 00:14:50.748 "is_configured": true, 00:14:50.748 "data_offset": 2048, 00:14:50.748 
"data_size": 63488 00:14:50.748 }, 00:14:50.748 { 00:14:50.748 "name": "BaseBdev3", 00:14:50.748 "uuid": "9ad6b85b-7abf-43b2-afd8-c9d8cef5e2df", 00:14:50.748 "is_configured": true, 00:14:50.748 "data_offset": 2048, 00:14:50.748 "data_size": 63488 00:14:50.748 }, 00:14:50.748 { 00:14:50.748 "name": "BaseBdev4", 00:14:50.748 "uuid": "7450adf6-748b-4984-aa64-eefed5134028", 00:14:50.748 "is_configured": true, 00:14:50.748 "data_offset": 2048, 00:14:50.748 "data_size": 63488 00:14:50.748 } 00:14:50.748 ] 00:14:50.748 }' 00:14:50.748 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.748 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.006 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:51.006 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.006 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.006 [2024-11-20 13:28:32.553815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.006 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.006 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:51.006 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.006 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.006 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.006 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.007 13:28:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.007 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.007 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.007 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.007 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.007 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.007 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.007 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.007 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.007 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.007 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.007 "name": "Existed_Raid", 00:14:51.007 "uuid": "831d854d-77cf-429c-a0b5-eafb1cbb5de5", 00:14:51.007 "strip_size_kb": 64, 00:14:51.007 "state": "configuring", 00:14:51.007 "raid_level": "raid5f", 00:14:51.007 "superblock": true, 00:14:51.007 "num_base_bdevs": 4, 00:14:51.007 "num_base_bdevs_discovered": 2, 00:14:51.007 "num_base_bdevs_operational": 4, 00:14:51.007 "base_bdevs_list": [ 00:14:51.007 { 00:14:51.007 "name": "BaseBdev1", 00:14:51.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.007 "is_configured": false, 00:14:51.007 "data_offset": 0, 00:14:51.007 "data_size": 0 00:14:51.007 }, 00:14:51.007 { 00:14:51.007 "name": null, 00:14:51.007 "uuid": "f6f2aa53-03d8-476f-ba49-0751df6d005b", 00:14:51.007 
"is_configured": false, 00:14:51.007 "data_offset": 0, 00:14:51.007 "data_size": 63488 00:14:51.007 }, 00:14:51.007 { 00:14:51.007 "name": "BaseBdev3", 00:14:51.007 "uuid": "9ad6b85b-7abf-43b2-afd8-c9d8cef5e2df", 00:14:51.007 "is_configured": true, 00:14:51.007 "data_offset": 2048, 00:14:51.007 "data_size": 63488 00:14:51.007 }, 00:14:51.007 { 00:14:51.007 "name": "BaseBdev4", 00:14:51.007 "uuid": "7450adf6-748b-4984-aa64-eefed5134028", 00:14:51.007 "is_configured": true, 00:14:51.007 "data_offset": 2048, 00:14:51.007 "data_size": 63488 00:14:51.007 } 00:14:51.007 ] 00:14:51.007 }' 00:14:51.007 13:28:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.007 13:28:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.576 [2024-11-20 13:28:33.080746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:14:51.576 BaseBdev1 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.576 [ 00:14:51.576 { 00:14:51.576 "name": "BaseBdev1", 00:14:51.576 "aliases": [ 00:14:51.576 "e536e7d9-6c39-4930-865b-a30a43ce180f" 00:14:51.576 ], 00:14:51.576 "product_name": "Malloc disk", 00:14:51.576 "block_size": 512, 00:14:51.576 "num_blocks": 65536, 00:14:51.576 "uuid": "e536e7d9-6c39-4930-865b-a30a43ce180f", 
00:14:51.576 "assigned_rate_limits": {
00:14:51.576 "rw_ios_per_sec": 0,
00:14:51.576 "rw_mbytes_per_sec": 0,
00:14:51.576 "r_mbytes_per_sec": 0,
00:14:51.576 "w_mbytes_per_sec": 0
00:14:51.576 },
00:14:51.576 "claimed": true,
00:14:51.576 "claim_type": "exclusive_write",
00:14:51.576 "zoned": false,
00:14:51.576 "supported_io_types": {
00:14:51.576 "read": true,
00:14:51.576 "write": true,
00:14:51.576 "unmap": true,
00:14:51.576 "flush": true,
00:14:51.576 "reset": true,
00:14:51.576 "nvme_admin": false,
00:14:51.576 "nvme_io": false,
00:14:51.576 "nvme_io_md": false,
00:14:51.576 "write_zeroes": true,
00:14:51.576 "zcopy": true,
00:14:51.576 "get_zone_info": false,
00:14:51.576 "zone_management": false,
00:14:51.576 "zone_append": false,
00:14:51.576 "compare": false,
00:14:51.576 "compare_and_write": false,
00:14:51.576 "abort": true,
00:14:51.576 "seek_hole": false,
00:14:51.576 "seek_data": false,
00:14:51.576 "copy": true,
00:14:51.576 "nvme_iov_md": false
00:14:51.576 },
00:14:51.576 "memory_domains": [
00:14:51.576 {
00:14:51.576 "dma_device_id": "system",
00:14:51.576 "dma_device_type": 1
00:14:51.576 },
00:14:51.576 {
00:14:51.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:51.576 "dma_device_type": 2
00:14:51.576 }
00:14:51.576 ],
00:14:51.576 "driver_specific": {}
00:14:51.576 }
00:14:51.576 ]
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:51.576 "name": "Existed_Raid",
00:14:51.576 "uuid": "831d854d-77cf-429c-a0b5-eafb1cbb5de5",
00:14:51.576 "strip_size_kb": 64,
00:14:51.576 "state": "configuring",
00:14:51.576 "raid_level": "raid5f",
00:14:51.576 "superblock": true,
00:14:51.576 "num_base_bdevs": 4,
00:14:51.576 "num_base_bdevs_discovered": 3,
00:14:51.576 "num_base_bdevs_operational": 4,
00:14:51.576 "base_bdevs_list": [
00:14:51.576 {
00:14:51.576 "name": "BaseBdev1",
00:14:51.576 "uuid": "e536e7d9-6c39-4930-865b-a30a43ce180f",
00:14:51.576 "is_configured": true,
00:14:51.576 "data_offset": 2048,
00:14:51.576 "data_size": 63488
00:14:51.576 },
00:14:51.576 {
00:14:51.576 "name": null,
00:14:51.576 "uuid": "f6f2aa53-03d8-476f-ba49-0751df6d005b",
00:14:51.576 "is_configured": false,
00:14:51.576 "data_offset": 0,
00:14:51.576 "data_size": 63488
00:14:51.576 },
00:14:51.576 {
00:14:51.576 "name": "BaseBdev3",
00:14:51.576 "uuid": "9ad6b85b-7abf-43b2-afd8-c9d8cef5e2df",
00:14:51.576 "is_configured": true,
00:14:51.576 "data_offset": 2048,
00:14:51.576 "data_size": 63488
00:14:51.576 },
00:14:51.576 {
00:14:51.576 "name": "BaseBdev4",
00:14:51.576 "uuid": "7450adf6-748b-4984-aa64-eefed5134028",
00:14:51.576 "is_configured": true,
00:14:51.576 "data_offset": 2048,
00:14:51.576 "data_size": 63488
00:14:51.576 }
00:14:51.576 ]
00:14:51.576 }'
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:51.576 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:52.146 [2024-11-20 13:28:33.663909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:52.146 "name": "Existed_Raid",
00:14:52.146 "uuid": "831d854d-77cf-429c-a0b5-eafb1cbb5de5",
00:14:52.146 "strip_size_kb": 64,
00:14:52.146 "state": "configuring",
00:14:52.146 "raid_level": "raid5f",
00:14:52.146 "superblock": true,
00:14:52.146 "num_base_bdevs": 4,
00:14:52.146 "num_base_bdevs_discovered": 2,
00:14:52.146 "num_base_bdevs_operational": 4,
00:14:52.146 "base_bdevs_list": [
00:14:52.146 {
00:14:52.146 "name": "BaseBdev1",
00:14:52.146 "uuid": "e536e7d9-6c39-4930-865b-a30a43ce180f",
00:14:52.146 "is_configured": true,
00:14:52.146 "data_offset": 2048,
00:14:52.146 "data_size": 63488
00:14:52.146 },
00:14:52.146 {
00:14:52.146 "name": null,
00:14:52.146 "uuid": "f6f2aa53-03d8-476f-ba49-0751df6d005b",
00:14:52.146 "is_configured": false,
00:14:52.146 "data_offset": 0,
00:14:52.146 "data_size": 63488
00:14:52.146 },
00:14:52.146 {
00:14:52.146 "name": null,
00:14:52.146 "uuid": "9ad6b85b-7abf-43b2-afd8-c9d8cef5e2df",
00:14:52.146 "is_configured": false,
00:14:52.146 "data_offset": 0,
00:14:52.146 "data_size": 63488
00:14:52.146 },
00:14:52.146 {
00:14:52.146 "name": "BaseBdev4",
00:14:52.146 "uuid": "7450adf6-748b-4984-aa64-eefed5134028",
00:14:52.146 "is_configured": true,
00:14:52.146 "data_offset": 2048,
00:14:52.146 "data_size": 63488
00:14:52.146 }
00:14:52.146 ]
00:14:52.146 }'
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:52.146 13:28:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:52.714 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:52.714 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.714 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:52.714 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:14:52.714 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.714 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:14:52.714 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:14:52.714 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.714 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:52.714 [2024-11-20 13:28:34.211726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:52.715 "name": "Existed_Raid",
00:14:52.715 "uuid": "831d854d-77cf-429c-a0b5-eafb1cbb5de5",
00:14:52.715 "strip_size_kb": 64,
00:14:52.715 "state": "configuring",
00:14:52.715 "raid_level": "raid5f",
00:14:52.715 "superblock": true,
00:14:52.715 "num_base_bdevs": 4,
00:14:52.715 "num_base_bdevs_discovered": 3,
00:14:52.715 "num_base_bdevs_operational": 4,
00:14:52.715 "base_bdevs_list": [
00:14:52.715 {
00:14:52.715 "name": "BaseBdev1",
00:14:52.715 "uuid": "e536e7d9-6c39-4930-865b-a30a43ce180f",
00:14:52.715 "is_configured": true,
00:14:52.715 "data_offset": 2048,
00:14:52.715 "data_size": 63488
00:14:52.715 },
00:14:52.715 {
00:14:52.715 "name": null,
00:14:52.715 "uuid": "f6f2aa53-03d8-476f-ba49-0751df6d005b",
00:14:52.715 "is_configured": false,
00:14:52.715 "data_offset": 0,
00:14:52.715 "data_size": 63488
00:14:52.715 },
00:14:52.715 {
00:14:52.715 "name": "BaseBdev3",
00:14:52.715 "uuid": "9ad6b85b-7abf-43b2-afd8-c9d8cef5e2df",
00:14:52.715 "is_configured": true,
00:14:52.715 "data_offset": 2048,
00:14:52.715 "data_size": 63488
00:14:52.715 },
00:14:52.715 {
00:14:52.715 "name": "BaseBdev4",
00:14:52.715 "uuid": "7450adf6-748b-4984-aa64-eefed5134028",
00:14:52.715 "is_configured": true,
00:14:52.715 "data_offset": 2048,
00:14:52.715 "data_size": 63488
00:14:52.715 }
00:14:52.715 ]
00:14:52.715 }'
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:52.715 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:53.285 [2024-11-20 13:28:34.731799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:53.285 "name": "Existed_Raid",
00:14:53.285 "uuid": "831d854d-77cf-429c-a0b5-eafb1cbb5de5",
00:14:53.285 "strip_size_kb": 64,
00:14:53.285 "state": "configuring",
00:14:53.285 "raid_level": "raid5f",
00:14:53.285 "superblock": true,
00:14:53.285 "num_base_bdevs": 4,
00:14:53.285 "num_base_bdevs_discovered": 2,
00:14:53.285 "num_base_bdevs_operational": 4,
00:14:53.285 "base_bdevs_list": [
00:14:53.285 {
00:14:53.285 "name": null,
00:14:53.285 "uuid": "e536e7d9-6c39-4930-865b-a30a43ce180f",
00:14:53.285 "is_configured": false,
00:14:53.285 "data_offset": 0,
00:14:53.285 "data_size": 63488
00:14:53.285 },
00:14:53.285 {
00:14:53.285 "name": null,
00:14:53.285 "uuid": "f6f2aa53-03d8-476f-ba49-0751df6d005b",
00:14:53.285 "is_configured": false,
00:14:53.285 "data_offset": 0,
00:14:53.285 "data_size": 63488
00:14:53.285 },
00:14:53.285 {
00:14:53.285 "name": "BaseBdev3",
00:14:53.285 "uuid": "9ad6b85b-7abf-43b2-afd8-c9d8cef5e2df",
00:14:53.285 "is_configured": true,
00:14:53.285 "data_offset": 2048,
00:14:53.285 "data_size": 63488
00:14:53.285 },
00:14:53.285 {
00:14:53.285 "name": "BaseBdev4",
00:14:53.285 "uuid": "7450adf6-748b-4984-aa64-eefed5134028",
00:14:53.285 "is_configured": true,
00:14:53.285 "data_offset": 2048,
00:14:53.285 "data_size": 63488
00:14:53.285 }
00:14:53.285 ]
00:14:53.285 }'
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:53.285 13:28:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:53.553 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:53.553 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:53.553 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:53.553 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:53.829 [2024-11-20 13:28:35.258247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:53.829 "name": "Existed_Raid",
00:14:53.829 "uuid": "831d854d-77cf-429c-a0b5-eafb1cbb5de5",
00:14:53.829 "strip_size_kb": 64,
00:14:53.829 "state": "configuring",
00:14:53.829 "raid_level": "raid5f",
00:14:53.829 "superblock": true,
00:14:53.829 "num_base_bdevs": 4,
00:14:53.829 "num_base_bdevs_discovered": 3,
00:14:53.829 "num_base_bdevs_operational": 4,
00:14:53.829 "base_bdevs_list": [
00:14:53.829 {
00:14:53.829 "name": null,
00:14:53.829 "uuid": "e536e7d9-6c39-4930-865b-a30a43ce180f",
00:14:53.829 "is_configured": false,
00:14:53.829 "data_offset": 0,
00:14:53.829 "data_size": 63488
00:14:53.829 },
00:14:53.829 {
00:14:53.829 "name": "BaseBdev2",
00:14:53.829 "uuid": "f6f2aa53-03d8-476f-ba49-0751df6d005b",
00:14:53.829 "is_configured": true,
00:14:53.829 "data_offset": 2048,
00:14:53.829 "data_size": 63488
00:14:53.829 },
00:14:53.829 {
00:14:53.829 "name": "BaseBdev3",
00:14:53.829 "uuid": "9ad6b85b-7abf-43b2-afd8-c9d8cef5e2df",
00:14:53.829 "is_configured": true,
00:14:53.829 "data_offset": 2048,
00:14:53.829 "data_size": 63488
00:14:53.829 },
00:14:53.829 {
00:14:53.829 "name": "BaseBdev4",
00:14:53.829 "uuid": "7450adf6-748b-4984-aa64-eefed5134028",
00:14:53.829 "is_configured": true,
00:14:53.829 "data_offset": 2048,
00:14:53.829 "data_size": 63488
00:14:53.829 }
00:14:53.829 ]
00:14:53.829 }'
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:53.829 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:54.090 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:14:54.090 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:54.090 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.090 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:54.090 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.090 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:14:54.090 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:14:54.090 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:54.090 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.090 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e536e7d9-6c39-4930-865b-a30a43ce180f
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:54.351 [2024-11-20 13:28:35.792990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:14:54.351 [2024-11-20 13:28:35.793364] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80
00:14:54.351 [2024-11-20 13:28:35.793428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:14:54.351 NewBaseBdev
00:14:54.351 [2024-11-20 13:28:35.793775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10
00:14:54.351 [2024-11-20 13:28:35.794328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80
00:14:54.351 [2024-11-20 13:28:35.794405] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80
00:14:54.351 [2024-11-20 13:28:35.794536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:54.351 [
00:14:54.351 {
00:14:54.351 "name": "NewBaseBdev",
00:14:54.351 "aliases": [
00:14:54.351 "e536e7d9-6c39-4930-865b-a30a43ce180f"
00:14:54.351 ],
00:14:54.351 "product_name": "Malloc disk",
00:14:54.351 "block_size": 512,
00:14:54.351 "num_blocks": 65536,
00:14:54.351 "uuid": "e536e7d9-6c39-4930-865b-a30a43ce180f",
00:14:54.351 "assigned_rate_limits": {
00:14:54.351 "rw_ios_per_sec": 0,
00:14:54.351 "rw_mbytes_per_sec": 0,
00:14:54.351 "r_mbytes_per_sec": 0,
00:14:54.351 "w_mbytes_per_sec": 0
00:14:54.351 },
00:14:54.351 "claimed": true,
00:14:54.351 "claim_type": "exclusive_write",
00:14:54.351 "zoned": false,
00:14:54.351 "supported_io_types": {
00:14:54.351 "read": true,
00:14:54.351 "write": true,
00:14:54.351 "unmap": true,
00:14:54.351 "flush": true,
00:14:54.351 "reset": true,
00:14:54.351 "nvme_admin": false,
00:14:54.351 "nvme_io": false,
00:14:54.351 "nvme_io_md": false,
00:14:54.351 "write_zeroes": true,
00:14:54.351 "zcopy": true,
00:14:54.351 "get_zone_info": false,
00:14:54.351 "zone_management": false,
00:14:54.351 "zone_append": false,
00:14:54.351 "compare": false,
00:14:54.351 "compare_and_write": false,
00:14:54.351 "abort": true,
00:14:54.351 "seek_hole": false,
00:14:54.351 "seek_data": false,
00:14:54.351 "copy": true,
00:14:54.351 "nvme_iov_md": false
00:14:54.351 },
00:14:54.351 "memory_domains": [
00:14:54.351 {
00:14:54.351 "dma_device_id": "system",
00:14:54.351 "dma_device_type": 1
00:14:54.351 },
00:14:54.351 {
00:14:54.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:54.351 "dma_device_type": 2
00:14:54.351 }
00:14:54.351 ],
00:14:54.351 "driver_specific": {}
00:14:54.351 }
00:14:54.351 ]
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.351 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:54.351 "name": "Existed_Raid",
00:14:54.351 "uuid": "831d854d-77cf-429c-a0b5-eafb1cbb5de5",
00:14:54.351 "strip_size_kb": 64,
00:14:54.351 "state": "online",
00:14:54.351 "raid_level": "raid5f",
00:14:54.351 "superblock": true,
00:14:54.351 "num_base_bdevs": 4,
00:14:54.351 "num_base_bdevs_discovered": 4,
00:14:54.351 "num_base_bdevs_operational": 4,
00:14:54.351 "base_bdevs_list": [
00:14:54.351 {
00:14:54.351 "name": "NewBaseBdev",
00:14:54.351 "uuid": "e536e7d9-6c39-4930-865b-a30a43ce180f",
00:14:54.351 "is_configured": true,
00:14:54.351 "data_offset": 2048,
00:14:54.351 "data_size": 63488
00:14:54.351 },
00:14:54.351 {
00:14:54.351 "name": "BaseBdev2",
00:14:54.351 "uuid": "f6f2aa53-03d8-476f-ba49-0751df6d005b",
00:14:54.351 "is_configured": true,
00:14:54.351 "data_offset": 2048,
00:14:54.351 "data_size": 63488
00:14:54.351 },
00:14:54.351 {
00:14:54.351 "name": "BaseBdev3",
00:14:54.351 "uuid": "9ad6b85b-7abf-43b2-afd8-c9d8cef5e2df",
00:14:54.351 "is_configured": true,
00:14:54.351 "data_offset": 2048,
00:14:54.351 "data_size": 63488
00:14:54.351 },
00:14:54.351 {
00:14:54.351 "name": "BaseBdev4",
00:14:54.352 "uuid": "7450adf6-748b-4984-aa64-eefed5134028",
00:14:54.352 "is_configured": true,
00:14:54.352 "data_offset": 2048,
00:14:54.352 "data_size": 63488
00:14:54.352 }
00:14:54.352 ]
00:14:54.352 }'
00:14:54.352 13:28:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:54.352 13:28:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:54.919 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:14:54.919 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:54.920 [2024-11-20 13:28:36.296475] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:54.920 "name": "Existed_Raid",
00:14:54.920 "aliases": [
00:14:54.920 "831d854d-77cf-429c-a0b5-eafb1cbb5de5"
00:14:54.920 ],
00:14:54.920 "product_name": "Raid Volume",
00:14:54.920 "block_size": 512,
00:14:54.920 "num_blocks": 190464,
00:14:54.920 "uuid": "831d854d-77cf-429c-a0b5-eafb1cbb5de5",
00:14:54.920 "assigned_rate_limits": {
00:14:54.920 "rw_ios_per_sec": 0,
00:14:54.920 "rw_mbytes_per_sec": 0,
00:14:54.920 "r_mbytes_per_sec": 0,
00:14:54.920 "w_mbytes_per_sec": 0
00:14:54.920 },
00:14:54.920 "claimed": false,
00:14:54.920 "zoned": false,
00:14:54.920 "supported_io_types": {
00:14:54.920 "read": true,
00:14:54.920 "write": true,
00:14:54.920 "unmap": false,
00:14:54.920 "flush": false,
00:14:54.920 "reset": true, 00:14:54.920 "nvme_admin": false, 00:14:54.920 "nvme_io": false, 00:14:54.920 "nvme_io_md": false, 00:14:54.920 "write_zeroes": true, 00:14:54.920 "zcopy": false, 00:14:54.920 "get_zone_info": false, 00:14:54.920 "zone_management": false, 00:14:54.920 "zone_append": false, 00:14:54.920 "compare": false, 00:14:54.920 "compare_and_write": false, 00:14:54.920 "abort": false, 00:14:54.920 "seek_hole": false, 00:14:54.920 "seek_data": false, 00:14:54.920 "copy": false, 00:14:54.920 "nvme_iov_md": false 00:14:54.920 }, 00:14:54.920 "driver_specific": { 00:14:54.920 "raid": { 00:14:54.920 "uuid": "831d854d-77cf-429c-a0b5-eafb1cbb5de5", 00:14:54.920 "strip_size_kb": 64, 00:14:54.920 "state": "online", 00:14:54.920 "raid_level": "raid5f", 00:14:54.920 "superblock": true, 00:14:54.920 "num_base_bdevs": 4, 00:14:54.920 "num_base_bdevs_discovered": 4, 00:14:54.920 "num_base_bdevs_operational": 4, 00:14:54.920 "base_bdevs_list": [ 00:14:54.920 { 00:14:54.920 "name": "NewBaseBdev", 00:14:54.920 "uuid": "e536e7d9-6c39-4930-865b-a30a43ce180f", 00:14:54.920 "is_configured": true, 00:14:54.920 "data_offset": 2048, 00:14:54.920 "data_size": 63488 00:14:54.920 }, 00:14:54.920 { 00:14:54.920 "name": "BaseBdev2", 00:14:54.920 "uuid": "f6f2aa53-03d8-476f-ba49-0751df6d005b", 00:14:54.920 "is_configured": true, 00:14:54.920 "data_offset": 2048, 00:14:54.920 "data_size": 63488 00:14:54.920 }, 00:14:54.920 { 00:14:54.920 "name": "BaseBdev3", 00:14:54.920 "uuid": "9ad6b85b-7abf-43b2-afd8-c9d8cef5e2df", 00:14:54.920 "is_configured": true, 00:14:54.920 "data_offset": 2048, 00:14:54.920 "data_size": 63488 00:14:54.920 }, 00:14:54.920 { 00:14:54.920 "name": "BaseBdev4", 00:14:54.920 "uuid": "7450adf6-748b-4984-aa64-eefed5134028", 00:14:54.920 "is_configured": true, 00:14:54.920 "data_offset": 2048, 00:14:54.920 "data_size": 63488 00:14:54.920 } 00:14:54.920 ] 00:14:54.920 } 00:14:54.920 } 00:14:54.920 }' 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:54.920 BaseBdev2 00:14:54.920 BaseBdev3 00:14:54.920 BaseBdev4' 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:54.920 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.180 [2024-11-20 13:28:36.623747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.180 [2024-11-20 13:28:36.623845] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.180 [2024-11-20 13:28:36.623972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.180 [2024-11-20 13:28:36.624309] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.180 [2024-11-20 13:28:36.624380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 93649 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 93649 ']' 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
93649 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93649 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93649' 00:14:55.180 killing process with pid 93649 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 93649 00:14:55.180 [2024-11-20 13:28:36.673680] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:55.180 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 93649 00:14:55.180 [2024-11-20 13:28:36.715073] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:55.439 13:28:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:55.439 ************************************ 00:14:55.439 END TEST raid5f_state_function_test_sb 00:14:55.439 ************************************ 00:14:55.439 00:14:55.439 real 0m10.054s 00:14:55.439 user 0m17.249s 00:14:55.439 sys 0m1.995s 00:14:55.439 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.439 13:28:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.439 13:28:36 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:14:55.439 13:28:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 
-le 1 ']' 00:14:55.439 13:28:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.439 13:28:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:55.439 ************************************ 00:14:55.439 START TEST raid5f_superblock_test 00:14:55.439 ************************************ 00:14:55.439 13:28:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:14:55.439 13:28:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:55.439 13:28:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:55.439 13:28:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:55.439 13:28:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:55.439 13:28:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:55.439 13:28:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:55.439 13:28:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:55.440 13:28:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:55.440 13:28:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:55.440 13:28:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:55.440 13:28:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:55.440 13:28:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:55.440 13:28:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:55.440 13:28:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:55.440 13:28:37 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:55.440 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:55.440 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94303 00:14:55.440 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:55.440 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94303 00:14:55.440 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 94303 ']' 00:14:55.440 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.440 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.440 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.440 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.440 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.440 [2024-11-20 13:28:37.087905] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:14:55.440 [2024-11-20 13:28:37.088069] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94303 ] 00:14:55.698 [2024-11-20 13:28:37.243289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.698 [2024-11-20 13:28:37.274402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.698 [2024-11-20 13:28:37.320669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.698 [2024-11-20 13:28:37.320780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.267 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.267 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:56.267 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:56.267 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.267 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:56.267 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:56.527 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:56.527 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:56.527 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:56.527 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:56.527 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:56.527 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.527 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.527 malloc1 00:14:56.527 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.528 [2024-11-20 13:28:37.960854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:56.528 [2024-11-20 13:28:37.960983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.528 [2024-11-20 13:28:37.961037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:56.528 [2024-11-20 13:28:37.961084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.528 [2024-11-20 13:28:37.963327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.528 [2024-11-20 13:28:37.963417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:56.528 pt1 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.528 malloc2 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.528 [2024-11-20 13:28:37.993705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:56.528 [2024-11-20 13:28:37.993813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.528 [2024-11-20 13:28:37.993836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:56.528 [2024-11-20 13:28:37.993849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.528 [2024-11-20 13:28:37.996160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.528 [2024-11-20 13:28:37.996205] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:56.528 pt2 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:56.528 13:28:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.528 malloc3 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.528 [2024-11-20 13:28:38.022694] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:56.528 [2024-11-20 13:28:38.022828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.528 [2024-11-20 13:28:38.022873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:56.528 [2024-11-20 13:28:38.022917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.528 [2024-11-20 13:28:38.025272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.528 [2024-11-20 13:28:38.025374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:56.528 pt3 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.528 13:28:38 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.528 malloc4 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.528 [2024-11-20 13:28:38.066356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:56.528 [2024-11-20 13:28:38.066477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.528 [2024-11-20 13:28:38.066516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:56.528 [2024-11-20 13:28:38.066565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.528 [2024-11-20 13:28:38.068823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.528 [2024-11-20 13:28:38.068924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:56.528 pt4 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:56.528 [2024-11-20 13:28:38.078359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:56.528 [2024-11-20 13:28:38.080306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:56.528 [2024-11-20 13:28:38.080442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:56.528 [2024-11-20 13:28:38.080537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:56.528 [2024-11-20 13:28:38.080758] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:56.528 [2024-11-20 13:28:38.080814] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:56.528 [2024-11-20 13:28:38.081113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:56.528 [2024-11-20 13:28:38.081700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:56.528 [2024-11-20 13:28:38.081767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:56.528 [2024-11-20 13:28:38.082005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.528 
13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.528 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.528 "name": "raid_bdev1", 00:14:56.529 "uuid": "c28cefb6-4dce-4ec7-94a0-759e3252b0cf", 00:14:56.529 "strip_size_kb": 64, 00:14:56.529 "state": "online", 00:14:56.529 "raid_level": "raid5f", 00:14:56.529 "superblock": true, 00:14:56.529 "num_base_bdevs": 4, 00:14:56.529 "num_base_bdevs_discovered": 4, 00:14:56.529 "num_base_bdevs_operational": 4, 00:14:56.529 "base_bdevs_list": [ 00:14:56.529 { 00:14:56.529 "name": "pt1", 00:14:56.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:56.529 "is_configured": true, 00:14:56.529 "data_offset": 2048, 00:14:56.529 "data_size": 63488 00:14:56.529 }, 00:14:56.529 { 00:14:56.529 "name": "pt2", 00:14:56.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.529 "is_configured": true, 00:14:56.529 "data_offset": 2048, 00:14:56.529 
"data_size": 63488 00:14:56.529 }, 00:14:56.529 { 00:14:56.529 "name": "pt3", 00:14:56.529 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:56.529 "is_configured": true, 00:14:56.529 "data_offset": 2048, 00:14:56.529 "data_size": 63488 00:14:56.529 }, 00:14:56.529 { 00:14:56.529 "name": "pt4", 00:14:56.529 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:56.529 "is_configured": true, 00:14:56.529 "data_offset": 2048, 00:14:56.529 "data_size": 63488 00:14:56.529 } 00:14:56.529 ] 00:14:56.529 }' 00:14:56.529 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.529 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.097 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:57.097 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:57.097 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:57.097 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:57.097 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:57.097 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:57.097 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:57.097 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.097 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.097 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.097 [2024-11-20 13:28:38.523436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.097 13:28:38 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.097 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:57.097 "name": "raid_bdev1", 00:14:57.097 "aliases": [ 00:14:57.097 "c28cefb6-4dce-4ec7-94a0-759e3252b0cf" 00:14:57.097 ], 00:14:57.097 "product_name": "Raid Volume", 00:14:57.097 "block_size": 512, 00:14:57.097 "num_blocks": 190464, 00:14:57.097 "uuid": "c28cefb6-4dce-4ec7-94a0-759e3252b0cf", 00:14:57.097 "assigned_rate_limits": { 00:14:57.097 "rw_ios_per_sec": 0, 00:14:57.097 "rw_mbytes_per_sec": 0, 00:14:57.097 "r_mbytes_per_sec": 0, 00:14:57.097 "w_mbytes_per_sec": 0 00:14:57.097 }, 00:14:57.097 "claimed": false, 00:14:57.097 "zoned": false, 00:14:57.097 "supported_io_types": { 00:14:57.097 "read": true, 00:14:57.098 "write": true, 00:14:57.098 "unmap": false, 00:14:57.098 "flush": false, 00:14:57.098 "reset": true, 00:14:57.098 "nvme_admin": false, 00:14:57.098 "nvme_io": false, 00:14:57.098 "nvme_io_md": false, 00:14:57.098 "write_zeroes": true, 00:14:57.098 "zcopy": false, 00:14:57.098 "get_zone_info": false, 00:14:57.098 "zone_management": false, 00:14:57.098 "zone_append": false, 00:14:57.098 "compare": false, 00:14:57.098 "compare_and_write": false, 00:14:57.098 "abort": false, 00:14:57.098 "seek_hole": false, 00:14:57.098 "seek_data": false, 00:14:57.098 "copy": false, 00:14:57.098 "nvme_iov_md": false 00:14:57.098 }, 00:14:57.098 "driver_specific": { 00:14:57.098 "raid": { 00:14:57.098 "uuid": "c28cefb6-4dce-4ec7-94a0-759e3252b0cf", 00:14:57.098 "strip_size_kb": 64, 00:14:57.098 "state": "online", 00:14:57.098 "raid_level": "raid5f", 00:14:57.098 "superblock": true, 00:14:57.098 "num_base_bdevs": 4, 00:14:57.098 "num_base_bdevs_discovered": 4, 00:14:57.098 "num_base_bdevs_operational": 4, 00:14:57.098 "base_bdevs_list": [ 00:14:57.098 { 00:14:57.098 "name": "pt1", 00:14:57.098 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:57.098 "is_configured": true, 00:14:57.098 "data_offset": 2048, 
00:14:57.098 "data_size": 63488 00:14:57.098 }, 00:14:57.098 { 00:14:57.098 "name": "pt2", 00:14:57.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.098 "is_configured": true, 00:14:57.098 "data_offset": 2048, 00:14:57.098 "data_size": 63488 00:14:57.098 }, 00:14:57.098 { 00:14:57.098 "name": "pt3", 00:14:57.098 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.098 "is_configured": true, 00:14:57.098 "data_offset": 2048, 00:14:57.098 "data_size": 63488 00:14:57.098 }, 00:14:57.098 { 00:14:57.098 "name": "pt4", 00:14:57.098 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:57.098 "is_configured": true, 00:14:57.098 "data_offset": 2048, 00:14:57.098 "data_size": 63488 00:14:57.098 } 00:14:57.098 ] 00:14:57.098 } 00:14:57.098 } 00:14:57.098 }' 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:57.098 pt2 00:14:57.098 pt3 00:14:57.098 pt4' 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.098 13:28:38 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.098 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.358 [2024-11-20 13:28:38.886820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c28cefb6-4dce-4ec7-94a0-759e3252b0cf 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
c28cefb6-4dce-4ec7-94a0-759e3252b0cf ']' 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.358 [2024-11-20 13:28:38.930520] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.358 [2024-11-20 13:28:38.930608] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.358 [2024-11-20 13:28:38.930741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.358 [2024-11-20 13:28:38.930877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.358 [2024-11-20 13:28:38.930932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:57.358 
13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.358 13:28:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.358 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.358 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:57.358 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:57.358 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.358 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.358 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.358 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:57.358 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:57.358 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.358 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.358 13:28:39 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.358 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:57.358 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:57.359 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.359 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.618 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.618 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:57.618 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:57.618 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:57.618 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:57.618 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:57.618 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:57.618 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:57.618 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:57.618 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:57.618 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:57.618 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.618 [2024-11-20 13:28:39.066342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:57.618 [2024-11-20 13:28:39.068617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:57.618 [2024-11-20 13:28:39.068755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:57.618 [2024-11-20 13:28:39.068822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:57.618 [2024-11-20 13:28:39.068938] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:57.618 [2024-11-20 13:28:39.069060] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:57.618 [2024-11-20 13:28:39.069095] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:57.618 [2024-11-20 13:28:39.069118] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:57.618 [2024-11-20 13:28:39.069139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.619 [2024-11-20 13:28:39.069163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:14:57.619 request: 00:14:57.619 { 00:14:57.619 "name": "raid_bdev1", 00:14:57.619 "raid_level": "raid5f", 00:14:57.619 "base_bdevs": [ 00:14:57.619 "malloc1", 00:14:57.619 "malloc2", 00:14:57.619 "malloc3", 00:14:57.619 "malloc4" 00:14:57.619 ], 00:14:57.619 "strip_size_kb": 64, 00:14:57.619 "superblock": false, 00:14:57.619 "method": "bdev_raid_create", 00:14:57.619 "req_id": 1 00:14:57.619 } 00:14:57.619 Got JSON-RPC error response 
00:14:57.619 response: 00:14:57.619 { 00:14:57.619 "code": -17, 00:14:57.619 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:57.619 } 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.619 [2024-11-20 13:28:39.130191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:57.619 [2024-11-20 13:28:39.130344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:57.619 [2024-11-20 13:28:39.130396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:57.619 [2024-11-20 13:28:39.130436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.619 [2024-11-20 13:28:39.132975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.619 [2024-11-20 13:28:39.133080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:57.619 [2024-11-20 13:28:39.133222] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:57.619 [2024-11-20 13:28:39.133306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:57.619 pt1 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.619 "name": "raid_bdev1", 00:14:57.619 "uuid": "c28cefb6-4dce-4ec7-94a0-759e3252b0cf", 00:14:57.619 "strip_size_kb": 64, 00:14:57.619 "state": "configuring", 00:14:57.619 "raid_level": "raid5f", 00:14:57.619 "superblock": true, 00:14:57.619 "num_base_bdevs": 4, 00:14:57.619 "num_base_bdevs_discovered": 1, 00:14:57.619 "num_base_bdevs_operational": 4, 00:14:57.619 "base_bdevs_list": [ 00:14:57.619 { 00:14:57.619 "name": "pt1", 00:14:57.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:57.619 "is_configured": true, 00:14:57.619 "data_offset": 2048, 00:14:57.619 "data_size": 63488 00:14:57.619 }, 00:14:57.619 { 00:14:57.619 "name": null, 00:14:57.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.619 "is_configured": false, 00:14:57.619 "data_offset": 2048, 00:14:57.619 "data_size": 63488 00:14:57.619 }, 00:14:57.619 { 00:14:57.619 "name": null, 00:14:57.619 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.619 "is_configured": false, 00:14:57.619 "data_offset": 2048, 00:14:57.619 "data_size": 63488 00:14:57.619 }, 00:14:57.619 { 00:14:57.619 "name": null, 00:14:57.619 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:57.619 "is_configured": false, 00:14:57.619 "data_offset": 2048, 00:14:57.619 "data_size": 63488 00:14:57.619 } 00:14:57.619 ] 00:14:57.619 }' 
00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.619 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.188 [2024-11-20 13:28:39.629366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:58.188 [2024-11-20 13:28:39.629491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.188 [2024-11-20 13:28:39.629555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:58.188 [2024-11-20 13:28:39.629595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.188 [2024-11-20 13:28:39.630111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.188 [2024-11-20 13:28:39.630150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:58.188 [2024-11-20 13:28:39.630248] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:58.188 [2024-11-20 13:28:39.630273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:58.188 pt2 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.188 [2024-11-20 13:28:39.637368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.188 "name": "raid_bdev1", 00:14:58.188 "uuid": "c28cefb6-4dce-4ec7-94a0-759e3252b0cf", 00:14:58.188 "strip_size_kb": 64, 00:14:58.188 "state": "configuring", 00:14:58.188 "raid_level": "raid5f", 00:14:58.188 "superblock": true, 00:14:58.188 "num_base_bdevs": 4, 00:14:58.188 "num_base_bdevs_discovered": 1, 00:14:58.188 "num_base_bdevs_operational": 4, 00:14:58.188 "base_bdevs_list": [ 00:14:58.188 { 00:14:58.188 "name": "pt1", 00:14:58.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:58.188 "is_configured": true, 00:14:58.188 "data_offset": 2048, 00:14:58.188 "data_size": 63488 00:14:58.188 }, 00:14:58.188 { 00:14:58.188 "name": null, 00:14:58.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.188 "is_configured": false, 00:14:58.188 "data_offset": 0, 00:14:58.188 "data_size": 63488 00:14:58.188 }, 00:14:58.188 { 00:14:58.188 "name": null, 00:14:58.188 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:58.188 "is_configured": false, 00:14:58.188 "data_offset": 2048, 00:14:58.188 "data_size": 63488 00:14:58.188 }, 00:14:58.188 { 00:14:58.188 "name": null, 00:14:58.188 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:58.188 "is_configured": false, 00:14:58.188 "data_offset": 2048, 00:14:58.188 "data_size": 63488 00:14:58.188 } 00:14:58.188 ] 00:14:58.188 }' 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.188 13:28:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.448 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:58.448 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:58.448 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:14:58.448 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.448 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.448 [2024-11-20 13:28:40.092612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:58.448 [2024-11-20 13:28:40.092781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.448 [2024-11-20 13:28:40.092838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:58.448 [2024-11-20 13:28:40.092879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.448 [2024-11-20 13:28:40.093365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.448 [2024-11-20 13:28:40.093439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:58.448 [2024-11-20 13:28:40.093558] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:58.448 [2024-11-20 13:28:40.093618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:58.448 pt2 00:14:58.448 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.448 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:58.448 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:58.448 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:58.448 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.448 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.448 [2024-11-20 13:28:40.104532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:14:58.448 [2024-11-20 13:28:40.104654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.448 [2024-11-20 13:28:40.104699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:58.448 [2024-11-20 13:28:40.104755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.448 [2024-11-20 13:28:40.105226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.449 [2024-11-20 13:28:40.105293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:58.449 [2024-11-20 13:28:40.105401] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:58.449 [2024-11-20 13:28:40.105459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:58.449 pt3 00:14:58.449 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.449 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:58.449 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:58.449 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:58.449 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.449 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.709 [2024-11-20 13:28:40.116502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:58.709 [2024-11-20 13:28:40.116612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.709 [2024-11-20 13:28:40.116669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:58.709 [2024-11-20 13:28:40.116734] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.709 [2024-11-20 13:28:40.117116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.709 [2024-11-20 13:28:40.117183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:58.709 [2024-11-20 13:28:40.117282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:58.709 [2024-11-20 13:28:40.117340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:58.709 [2024-11-20 13:28:40.117486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:58.709 [2024-11-20 13:28:40.117533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:58.709 [2024-11-20 13:28:40.117818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:58.709 [2024-11-20 13:28:40.118379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:58.709 [2024-11-20 13:28:40.118437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:14:58.709 [2024-11-20 13:28:40.118602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.709 pt4 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.709 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.709 "name": "raid_bdev1", 00:14:58.709 "uuid": "c28cefb6-4dce-4ec7-94a0-759e3252b0cf", 00:14:58.709 "strip_size_kb": 64, 00:14:58.709 "state": "online", 00:14:58.709 "raid_level": "raid5f", 00:14:58.709 "superblock": true, 00:14:58.709 "num_base_bdevs": 4, 00:14:58.709 "num_base_bdevs_discovered": 4, 00:14:58.709 "num_base_bdevs_operational": 4, 00:14:58.709 "base_bdevs_list": [ 00:14:58.709 { 00:14:58.709 "name": "pt1", 00:14:58.709 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:58.709 "is_configured": true, 00:14:58.709 
"data_offset": 2048, 00:14:58.709 "data_size": 63488 00:14:58.709 }, 00:14:58.709 { 00:14:58.709 "name": "pt2", 00:14:58.709 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.709 "is_configured": true, 00:14:58.709 "data_offset": 2048, 00:14:58.709 "data_size": 63488 00:14:58.709 }, 00:14:58.709 { 00:14:58.709 "name": "pt3", 00:14:58.709 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:58.709 "is_configured": true, 00:14:58.709 "data_offset": 2048, 00:14:58.709 "data_size": 63488 00:14:58.709 }, 00:14:58.709 { 00:14:58.709 "name": "pt4", 00:14:58.709 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:58.709 "is_configured": true, 00:14:58.709 "data_offset": 2048, 00:14:58.709 "data_size": 63488 00:14:58.709 } 00:14:58.709 ] 00:14:58.709 }' 00:14:58.710 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.710 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.970 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:58.970 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:58.970 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:58.970 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:58.970 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:58.970 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:58.970 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:58.970 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.970 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.970 13:28:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:58.970 [2024-11-20 13:28:40.603967] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.970 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:59.230 "name": "raid_bdev1", 00:14:59.230 "aliases": [ 00:14:59.230 "c28cefb6-4dce-4ec7-94a0-759e3252b0cf" 00:14:59.230 ], 00:14:59.230 "product_name": "Raid Volume", 00:14:59.230 "block_size": 512, 00:14:59.230 "num_blocks": 190464, 00:14:59.230 "uuid": "c28cefb6-4dce-4ec7-94a0-759e3252b0cf", 00:14:59.230 "assigned_rate_limits": { 00:14:59.230 "rw_ios_per_sec": 0, 00:14:59.230 "rw_mbytes_per_sec": 0, 00:14:59.230 "r_mbytes_per_sec": 0, 00:14:59.230 "w_mbytes_per_sec": 0 00:14:59.230 }, 00:14:59.230 "claimed": false, 00:14:59.230 "zoned": false, 00:14:59.230 "supported_io_types": { 00:14:59.230 "read": true, 00:14:59.230 "write": true, 00:14:59.230 "unmap": false, 00:14:59.230 "flush": false, 00:14:59.230 "reset": true, 00:14:59.230 "nvme_admin": false, 00:14:59.230 "nvme_io": false, 00:14:59.230 "nvme_io_md": false, 00:14:59.230 "write_zeroes": true, 00:14:59.230 "zcopy": false, 00:14:59.230 "get_zone_info": false, 00:14:59.230 "zone_management": false, 00:14:59.230 "zone_append": false, 00:14:59.230 "compare": false, 00:14:59.230 "compare_and_write": false, 00:14:59.230 "abort": false, 00:14:59.230 "seek_hole": false, 00:14:59.230 "seek_data": false, 00:14:59.230 "copy": false, 00:14:59.230 "nvme_iov_md": false 00:14:59.230 }, 00:14:59.230 "driver_specific": { 00:14:59.230 "raid": { 00:14:59.230 "uuid": "c28cefb6-4dce-4ec7-94a0-759e3252b0cf", 00:14:59.230 "strip_size_kb": 64, 00:14:59.230 "state": "online", 00:14:59.230 "raid_level": "raid5f", 00:14:59.230 "superblock": true, 00:14:59.230 "num_base_bdevs": 4, 00:14:59.230 "num_base_bdevs_discovered": 4, 
00:14:59.230 "num_base_bdevs_operational": 4, 00:14:59.230 "base_bdevs_list": [ 00:14:59.230 { 00:14:59.230 "name": "pt1", 00:14:59.230 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:59.230 "is_configured": true, 00:14:59.230 "data_offset": 2048, 00:14:59.230 "data_size": 63488 00:14:59.230 }, 00:14:59.230 { 00:14:59.230 "name": "pt2", 00:14:59.230 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.230 "is_configured": true, 00:14:59.230 "data_offset": 2048, 00:14:59.230 "data_size": 63488 00:14:59.230 }, 00:14:59.230 { 00:14:59.230 "name": "pt3", 00:14:59.230 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:59.230 "is_configured": true, 00:14:59.230 "data_offset": 2048, 00:14:59.230 "data_size": 63488 00:14:59.230 }, 00:14:59.230 { 00:14:59.230 "name": "pt4", 00:14:59.230 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:59.230 "is_configured": true, 00:14:59.230 "data_offset": 2048, 00:14:59.230 "data_size": 63488 00:14:59.230 } 00:14:59.230 ] 00:14:59.230 } 00:14:59.230 } 00:14:59.230 }' 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:59.230 pt2 00:14:59.230 pt3 00:14:59.230 pt4' 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
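The `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'` filter above, which yields `pt1 pt2 pt3 pt4`, can be reproduced without jq. A minimal awk stand-in, run over a JSON fragment abridged from this log (the unconfigured `pt9` entry is hypothetical, added only to exercise the filter):

```shell
# Print the names of configured base bdevs -- a plain-awk stand-in for
# jq's '.base_bdevs_list[] | select(.is_configured == true).name'.
# Splitting on '"' makes $4 the value of the "name" key on each entry line.
awk -F'"' '
    /"name":/               { name = $4 }
    /"is_configured": true/ { print name }
' <<'EOF'
{ "base_bdevs_list": [
    { "name": "pt1", "is_configured": true },
    { "name": "pt9", "is_configured": false },
    { "name": "pt2", "is_configured": true }
] }
EOF
```

This relies on each array entry sitting on one line, as in the abridged input; the harness itself uses jq precisely because it does not depend on JSON layout.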
bdev_get_bdevs -b pt1 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.230 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.490 [2024-11-20 13:28:40.939395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.490 13:28:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c28cefb6-4dce-4ec7-94a0-759e3252b0cf '!=' c28cefb6-4dce-4ec7-94a0-759e3252b0cf ']' 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.490 [2024-11-20 13:28:40.971213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
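The `has_redundancy raid5f` call traced at @491 above (hitting `case $1 in` at @198 and `return 0` at @199) gates whether the test may remove a base bdev and still expect the array online. A sketch of that helper as a plain case statement; the exact membership of the redundant set is an assumption here, inferred only from the raid5f path taken in this log:

```shell
# Hypothetical sketch of the harness's has_redundancy helper:
# return 0 (success) for raid levels that survive a missing base bdev,
# 1 otherwise. Which levels belong in the first branch is an assumption.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;
        *) return 1 ;;
    esac
}
```

Because it returns 0 for raid5f, the trace proceeds to `bdev_passthru_delete pt1` and then verifies the array stays online with 3 of 4 base bdevs.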
num_base_bdevs_discovered 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.490 13:28:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.490 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.490 "name": "raid_bdev1", 00:14:59.490 "uuid": "c28cefb6-4dce-4ec7-94a0-759e3252b0cf", 00:14:59.490 "strip_size_kb": 64, 00:14:59.490 "state": "online", 00:14:59.490 "raid_level": "raid5f", 00:14:59.490 "superblock": true, 00:14:59.490 "num_base_bdevs": 4, 00:14:59.490 "num_base_bdevs_discovered": 3, 00:14:59.490 "num_base_bdevs_operational": 3, 00:14:59.490 "base_bdevs_list": [ 00:14:59.490 { 00:14:59.490 "name": null, 00:14:59.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.490 "is_configured": false, 00:14:59.490 "data_offset": 0, 00:14:59.490 "data_size": 63488 00:14:59.490 }, 00:14:59.490 { 00:14:59.490 "name": "pt2", 00:14:59.490 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.490 "is_configured": true, 00:14:59.490 "data_offset": 2048, 00:14:59.490 "data_size": 63488 00:14:59.490 }, 00:14:59.490 { 00:14:59.490 "name": "pt3", 00:14:59.490 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:59.490 "is_configured": true, 00:14:59.490 "data_offset": 2048, 00:14:59.490 "data_size": 63488 00:14:59.490 }, 00:14:59.490 { 00:14:59.490 "name": "pt4", 00:14:59.490 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:59.490 "is_configured": true, 00:14:59.490 
"data_offset": 2048, 00:14:59.490 "data_size": 63488 00:14:59.490 } 00:14:59.490 ] 00:14:59.490 }' 00:14:59.490 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.490 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.060 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.061 [2024-11-20 13:28:41.478278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.061 [2024-11-20 13:28:41.478382] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.061 [2024-11-20 13:28:41.478523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.061 [2024-11-20 13:28:41.478626] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.061 [2024-11-20 13:28:41.478691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.061 [2024-11-20 13:28:41.546142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:00.061 [2024-11-20 13:28:41.546261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.061 [2024-11-20 13:28:41.546318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:00.061 [2024-11-20 13:28:41.546364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.061 [2024-11-20 13:28:41.548626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.061 [2024-11-20 13:28:41.548718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:00.061 [2024-11-20 13:28:41.548828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:00.061 [2024-11-20 13:28:41.548912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:00.061 pt2 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- 
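The rebuild sequence traced above (delete the raid bdev, delete pt2..pt4, then loop at @511-@512 re-creating pt2 onward so the array can be re-assembled from the on-disk superblock) can be sketched as a dry run. `rpc_cmd` here is a hypothetical echo stub; in the harness it wraps `scripts/rpc.py` against the running SPDK target:

```shell
# Dry-run sketch of the @511-512 loop: re-create pt2..pt(n-1); pt4 is
# created separately afterwards (@519-520 in the trace). The echo stub
# stands in for the real RPC call.
rpc_cmd() { echo "rpc_cmd $*"; }

num_base_bdevs=4
i=1
while [ "$i" -lt "$((num_base_bdevs - 1))" ]; do
    rpc_cmd bdev_passthru_create -b "malloc$((i + 1))" -p "pt$((i + 1))" \
        -u "00000000-0000-0000-0000-00000000000$((i + 1))"
    i=$((i + 1))
done
```

Running this prints the pt2 and pt3 creation calls, matching the `bdev_passthru_create -b malloc2 -p pt2` and `-b malloc3 -p pt3` records in the log; raid5f then reconstructs the missing pt1 data from parity once three of four bases are back.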
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.061 "name": "raid_bdev1", 00:15:00.061 "uuid": "c28cefb6-4dce-4ec7-94a0-759e3252b0cf", 00:15:00.061 "strip_size_kb": 64, 00:15:00.061 "state": "configuring", 00:15:00.061 "raid_level": "raid5f", 00:15:00.061 "superblock": true, 00:15:00.061 
"num_base_bdevs": 4, 00:15:00.061 "num_base_bdevs_discovered": 1, 00:15:00.061 "num_base_bdevs_operational": 3, 00:15:00.061 "base_bdevs_list": [ 00:15:00.061 { 00:15:00.061 "name": null, 00:15:00.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.061 "is_configured": false, 00:15:00.061 "data_offset": 2048, 00:15:00.061 "data_size": 63488 00:15:00.061 }, 00:15:00.061 { 00:15:00.061 "name": "pt2", 00:15:00.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.061 "is_configured": true, 00:15:00.061 "data_offset": 2048, 00:15:00.061 "data_size": 63488 00:15:00.061 }, 00:15:00.061 { 00:15:00.061 "name": null, 00:15:00.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.061 "is_configured": false, 00:15:00.061 "data_offset": 2048, 00:15:00.061 "data_size": 63488 00:15:00.061 }, 00:15:00.061 { 00:15:00.061 "name": null, 00:15:00.061 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:00.061 "is_configured": false, 00:15:00.061 "data_offset": 2048, 00:15:00.061 "data_size": 63488 00:15:00.061 } 00:15:00.061 ] 00:15:00.061 }' 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.061 13:28:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.642 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:00.642 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:00.642 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:00.642 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.642 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.642 [2024-11-20 13:28:42.013427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:00.643 [2024-11-20 
13:28:42.013596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.643 [2024-11-20 13:28:42.013643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:00.643 [2024-11-20 13:28:42.013686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.643 [2024-11-20 13:28:42.014149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.643 [2024-11-20 13:28:42.014222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:00.643 [2024-11-20 13:28:42.014347] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:00.643 [2024-11-20 13:28:42.014411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:00.643 pt3 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.643 "name": "raid_bdev1", 00:15:00.643 "uuid": "c28cefb6-4dce-4ec7-94a0-759e3252b0cf", 00:15:00.643 "strip_size_kb": 64, 00:15:00.643 "state": "configuring", 00:15:00.643 "raid_level": "raid5f", 00:15:00.643 "superblock": true, 00:15:00.643 "num_base_bdevs": 4, 00:15:00.643 "num_base_bdevs_discovered": 2, 00:15:00.643 "num_base_bdevs_operational": 3, 00:15:00.643 "base_bdevs_list": [ 00:15:00.643 { 00:15:00.643 "name": null, 00:15:00.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.643 "is_configured": false, 00:15:00.643 "data_offset": 2048, 00:15:00.643 "data_size": 63488 00:15:00.643 }, 00:15:00.643 { 00:15:00.643 "name": "pt2", 00:15:00.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.643 "is_configured": true, 00:15:00.643 "data_offset": 2048, 00:15:00.643 "data_size": 63488 00:15:00.643 }, 00:15:00.643 { 00:15:00.643 "name": "pt3", 00:15:00.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.643 "is_configured": true, 00:15:00.643 "data_offset": 2048, 00:15:00.643 "data_size": 63488 00:15:00.643 }, 00:15:00.643 { 00:15:00.643 "name": null, 00:15:00.643 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:00.643 "is_configured": false, 00:15:00.643 "data_offset": 2048, 
00:15:00.643 "data_size": 63488 00:15:00.643 } 00:15:00.643 ] 00:15:00.643 }' 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.643 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.917 [2024-11-20 13:28:42.524538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:00.917 [2024-11-20 13:28:42.524698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.917 [2024-11-20 13:28:42.524749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:00.917 [2024-11-20 13:28:42.524804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.917 [2024-11-20 13:28:42.525331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.917 [2024-11-20 13:28:42.525409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:00.917 [2024-11-20 13:28:42.525545] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:00.917 [2024-11-20 13:28:42.525613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:00.917 [2024-11-20 13:28:42.525773] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:00.917 [2024-11-20 13:28:42.525825] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:00.917 [2024-11-20 13:28:42.526138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:15:00.917 [2024-11-20 13:28:42.526824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:00.917 [2024-11-20 13:28:42.526888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:15:00.917 [2024-11-20 13:28:42.527256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.917 pt4 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.917 
13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.917 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.917 "name": "raid_bdev1", 00:15:00.917 "uuid": "c28cefb6-4dce-4ec7-94a0-759e3252b0cf", 00:15:00.917 "strip_size_kb": 64, 00:15:00.917 "state": "online", 00:15:00.917 "raid_level": "raid5f", 00:15:00.917 "superblock": true, 00:15:00.917 "num_base_bdevs": 4, 00:15:00.917 "num_base_bdevs_discovered": 3, 00:15:00.917 "num_base_bdevs_operational": 3, 00:15:00.917 "base_bdevs_list": [ 00:15:00.917 { 00:15:00.917 "name": null, 00:15:00.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.917 "is_configured": false, 00:15:00.917 "data_offset": 2048, 00:15:00.917 "data_size": 63488 00:15:00.917 }, 00:15:00.917 { 00:15:00.917 "name": "pt2", 00:15:00.917 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.917 "is_configured": true, 00:15:00.917 "data_offset": 2048, 00:15:00.917 "data_size": 63488 00:15:00.917 }, 00:15:00.917 { 00:15:00.917 "name": "pt3", 00:15:00.918 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.918 "is_configured": true, 00:15:00.918 "data_offset": 2048, 00:15:00.918 "data_size": 63488 00:15:00.918 }, 00:15:00.918 { 00:15:00.918 "name": "pt4", 00:15:00.918 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:00.918 "is_configured": true, 00:15:00.918 "data_offset": 2048, 00:15:00.918 "data_size": 63488 00:15:00.918 } 00:15:00.918 ] 00:15:00.918 }' 00:15:00.918 13:28:42 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.918 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.487 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:01.487 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.487 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.487 [2024-11-20 13:28:42.968238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:01.487 [2024-11-20 13:28:42.968341] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.487 [2024-11-20 13:28:42.968470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.487 [2024-11-20 13:28:42.968597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.487 [2024-11-20 13:28:42.968660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:01.487 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.487 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.487 13:28:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:01.487 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.487 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.487 13:28:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.487 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:01.487 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:15:01.487 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:01.487 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:01.487 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:01.487 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.487 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.488 [2024-11-20 13:28:43.044145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:01.488 [2024-11-20 13:28:43.044293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.488 [2024-11-20 13:28:43.044325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:01.488 [2024-11-20 13:28:43.044340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.488 [2024-11-20 13:28:43.046917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.488 [2024-11-20 13:28:43.046961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:01.488 [2024-11-20 13:28:43.047063] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:01.488 [2024-11-20 13:28:43.047107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:01.488 
[2024-11-20 13:28:43.047219] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:01.488 [2024-11-20 13:28:43.047232] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:01.488 [2024-11-20 13:28:43.047263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:01.488 [2024-11-20 13:28:43.047310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:01.488 pt1 00:15:01.488 [2024-11-20 13:28:43.047430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.488 "name": "raid_bdev1", 00:15:01.488 "uuid": "c28cefb6-4dce-4ec7-94a0-759e3252b0cf", 00:15:01.488 "strip_size_kb": 64, 00:15:01.488 "state": "configuring", 00:15:01.488 "raid_level": "raid5f", 00:15:01.488 "superblock": true, 00:15:01.488 "num_base_bdevs": 4, 00:15:01.488 "num_base_bdevs_discovered": 2, 00:15:01.488 "num_base_bdevs_operational": 3, 00:15:01.488 "base_bdevs_list": [ 00:15:01.488 { 00:15:01.488 "name": null, 00:15:01.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.488 "is_configured": false, 00:15:01.488 "data_offset": 2048, 00:15:01.488 "data_size": 63488 00:15:01.488 }, 00:15:01.488 { 00:15:01.488 "name": "pt2", 00:15:01.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.488 "is_configured": true, 00:15:01.488 "data_offset": 2048, 00:15:01.488 "data_size": 63488 00:15:01.488 }, 00:15:01.488 { 00:15:01.488 "name": "pt3", 00:15:01.488 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.488 "is_configured": true, 00:15:01.488 "data_offset": 2048, 00:15:01.488 "data_size": 63488 00:15:01.488 }, 00:15:01.488 { 00:15:01.488 "name": null, 00:15:01.488 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:01.488 "is_configured": false, 00:15:01.488 "data_offset": 2048, 00:15:01.488 "data_size": 63488 00:15:01.488 } 00:15:01.488 ] 
00:15:01.488 }' 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.488 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.057 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:02.057 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:02.057 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.057 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.057 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.057 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:02.057 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:02.057 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.057 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.057 [2024-11-20 13:28:43.555497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:02.057 [2024-11-20 13:28:43.555635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.058 [2024-11-20 13:28:43.555704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:02.058 [2024-11-20 13:28:43.555755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.058 [2024-11-20 13:28:43.556312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.058 [2024-11-20 13:28:43.556393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:15:02.058 [2024-11-20 13:28:43.556535] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:02.058 [2024-11-20 13:28:43.556624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:02.058 [2024-11-20 13:28:43.556811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:02.058 [2024-11-20 13:28:43.556871] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:02.058 [2024-11-20 13:28:43.557236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:15:02.058 [2024-11-20 13:28:43.558048] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:02.058 [2024-11-20 13:28:43.558132] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:02.058 [2024-11-20 13:28:43.558455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.058 pt4 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.058 13:28:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.058 "name": "raid_bdev1", 00:15:02.058 "uuid": "c28cefb6-4dce-4ec7-94a0-759e3252b0cf", 00:15:02.058 "strip_size_kb": 64, 00:15:02.058 "state": "online", 00:15:02.058 "raid_level": "raid5f", 00:15:02.058 "superblock": true, 00:15:02.058 "num_base_bdevs": 4, 00:15:02.058 "num_base_bdevs_discovered": 3, 00:15:02.058 "num_base_bdevs_operational": 3, 00:15:02.058 "base_bdevs_list": [ 00:15:02.058 { 00:15:02.058 "name": null, 00:15:02.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.058 "is_configured": false, 00:15:02.058 "data_offset": 2048, 00:15:02.058 "data_size": 63488 00:15:02.058 }, 00:15:02.058 { 00:15:02.058 "name": "pt2", 00:15:02.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.058 "is_configured": true, 00:15:02.058 "data_offset": 2048, 00:15:02.058 "data_size": 63488 00:15:02.058 }, 00:15:02.058 { 00:15:02.058 "name": "pt3", 00:15:02.058 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.058 "is_configured": true, 00:15:02.058 "data_offset": 2048, 00:15:02.058 "data_size": 63488 
00:15:02.058 }, 00:15:02.058 { 00:15:02.058 "name": "pt4", 00:15:02.058 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:02.058 "is_configured": true, 00:15:02.058 "data_offset": 2048, 00:15:02.058 "data_size": 63488 00:15:02.058 } 00:15:02.058 ] 00:15:02.058 }' 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.058 13:28:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.627 [2024-11-20 13:28:44.087793] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c28cefb6-4dce-4ec7-94a0-759e3252b0cf '!=' c28cefb6-4dce-4ec7-94a0-759e3252b0cf ']' 00:15:02.627 13:28:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94303 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 94303 ']' 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 94303 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94303 00:15:02.627 killing process with pid 94303 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94303' 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 94303 00:15:02.627 [2024-11-20 13:28:44.162183] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.627 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 94303 00:15:02.628 [2024-11-20 13:28:44.162304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.628 [2024-11-20 13:28:44.162400] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.628 [2024-11-20 13:28:44.162412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:02.628 [2024-11-20 13:28:44.206794] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:02.888 13:28:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:02.888 
00:15:02.888 real 0m7.435s 00:15:02.888 user 0m12.548s 00:15:02.888 sys 0m1.604s 00:15:02.888 ************************************ 00:15:02.888 END TEST raid5f_superblock_test 00:15:02.888 ************************************ 00:15:02.888 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.888 13:28:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.888 13:28:44 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:02.888 13:28:44 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:02.888 13:28:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:02.888 13:28:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.888 13:28:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:02.888 ************************************ 00:15:02.888 START TEST raid5f_rebuild_test 00:15:02.888 ************************************ 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:02.888 13:28:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=94783 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 94783 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 94783 ']' 00:15:02.888 13:28:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.889 13:28:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:02.889 13:28:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.889 13:28:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:02.889 13:28:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.148 [2024-11-20 13:28:44.597696] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:15:03.148 [2024-11-20 13:28:44.597937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:03.148 Zero copy mechanism will not be used. 
00:15:03.148 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94783 ] 00:15:03.148 [2024-11-20 13:28:44.737082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.148 [2024-11-20 13:28:44.766614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.148 [2024-11-20 13:28:44.809926] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.148 [2024-11-20 13:28:44.810075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.089 BaseBdev1_malloc 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.089 [2024-11-20 13:28:45.504341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:04.089 [2024-11-20 13:28:45.504456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:15:04.089 [2024-11-20 13:28:45.504507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:04.089 [2024-11-20 13:28:45.504572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.089 [2024-11-20 13:28:45.506941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.089 [2024-11-20 13:28:45.507031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:04.089 BaseBdev1 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.089 BaseBdev2_malloc 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.089 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.089 [2024-11-20 13:28:45.533382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:04.089 [2024-11-20 13:28:45.533502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.089 [2024-11-20 13:28:45.533551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:04.089 [2024-11-20 13:28:45.533613] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.089 [2024-11-20 13:28:45.536152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.089 [2024-11-20 13:28:45.536245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:04.089 BaseBdev2 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.090 BaseBdev3_malloc 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.090 [2024-11-20 13:28:45.562463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:04.090 [2024-11-20 13:28:45.562592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.090 [2024-11-20 13:28:45.562651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:04.090 [2024-11-20 13:28:45.562713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.090 [2024-11-20 13:28:45.564955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.090 [2024-11-20 
13:28:45.565034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:04.090 BaseBdev3 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.090 BaseBdev4_malloc 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.090 [2024-11-20 13:28:45.601346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:04.090 [2024-11-20 13:28:45.601436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.090 [2024-11-20 13:28:45.601476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:04.090 [2024-11-20 13:28:45.601506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.090 [2024-11-20 13:28:45.603633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.090 [2024-11-20 13:28:45.603705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:04.090 BaseBdev4 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.090 spare_malloc 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.090 spare_delay 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.090 [2024-11-20 13:28:45.641819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:04.090 [2024-11-20 13:28:45.641871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.090 [2024-11-20 13:28:45.641889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:04.090 [2024-11-20 13:28:45.641898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.090 [2024-11-20 13:28:45.644155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.090 [2024-11-20 13:28:45.644192] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:04.090 spare 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.090 [2024-11-20 13:28:45.653905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.090 [2024-11-20 13:28:45.655979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.090 [2024-11-20 13:28:45.656117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:04.090 [2024-11-20 13:28:45.656211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:04.090 [2024-11-20 13:28:45.656346] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:04.090 [2024-11-20 13:28:45.656391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:04.090 [2024-11-20 13:28:45.656692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:04.090 [2024-11-20 13:28:45.657202] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:04.090 [2024-11-20 13:28:45.657252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:04.090 [2024-11-20 13:28:45.657417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.090 13:28:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.090 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.090 "name": "raid_bdev1", 00:15:04.090 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:04.090 "strip_size_kb": 64, 00:15:04.090 "state": "online", 00:15:04.090 "raid_level": "raid5f", 00:15:04.090 "superblock": false, 00:15:04.090 "num_base_bdevs": 4, 00:15:04.090 
"num_base_bdevs_discovered": 4, 00:15:04.090 "num_base_bdevs_operational": 4, 00:15:04.090 "base_bdevs_list": [ 00:15:04.090 { 00:15:04.090 "name": "BaseBdev1", 00:15:04.090 "uuid": "64397bb8-dad3-5ef4-8abf-0042c32d4aca", 00:15:04.090 "is_configured": true, 00:15:04.090 "data_offset": 0, 00:15:04.090 "data_size": 65536 00:15:04.090 }, 00:15:04.090 { 00:15:04.090 "name": "BaseBdev2", 00:15:04.090 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:04.090 "is_configured": true, 00:15:04.090 "data_offset": 0, 00:15:04.090 "data_size": 65536 00:15:04.090 }, 00:15:04.090 { 00:15:04.090 "name": "BaseBdev3", 00:15:04.090 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:04.090 "is_configured": true, 00:15:04.091 "data_offset": 0, 00:15:04.091 "data_size": 65536 00:15:04.091 }, 00:15:04.091 { 00:15:04.091 "name": "BaseBdev4", 00:15:04.091 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:04.091 "is_configured": true, 00:15:04.091 "data_offset": 0, 00:15:04.091 "data_size": 65536 00:15:04.091 } 00:15:04.091 ] 00:15:04.091 }' 00:15:04.091 13:28:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.091 13:28:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.661 [2024-11-20 13:28:46.134658] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.661 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:04.921 [2024-11-20 13:28:46.417976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:15:04.921 /dev/nbd0 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.921 1+0 records in 00:15:04.921 1+0 records out 00:15:04.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471345 s, 8.7 MB/s 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:04.921 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:05.490 512+0 records in 00:15:05.490 512+0 records out 00:15:05.490 100663296 bytes (101 MB, 96 MiB) copied, 0.455802 s, 221 MB/s 00:15:05.490 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:05.490 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.490 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:05.490 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:05.490 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:05.490 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.490 13:28:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:05.750 [2024-11-20 13:28:47.180830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.750 [2024-11-20 13:28:47.192897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.750 "name": "raid_bdev1", 00:15:05.750 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:05.750 "strip_size_kb": 64, 00:15:05.750 "state": "online", 00:15:05.750 "raid_level": "raid5f", 00:15:05.750 "superblock": false, 00:15:05.750 "num_base_bdevs": 4, 00:15:05.750 "num_base_bdevs_discovered": 3, 00:15:05.750 "num_base_bdevs_operational": 3, 00:15:05.750 "base_bdevs_list": [ 00:15:05.750 { 00:15:05.750 "name": null, 00:15:05.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.750 "is_configured": false, 00:15:05.750 "data_offset": 0, 00:15:05.750 "data_size": 65536 00:15:05.750 }, 00:15:05.750 { 00:15:05.750 "name": "BaseBdev2", 00:15:05.750 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:05.750 "is_configured": true, 00:15:05.750 "data_offset": 0, 00:15:05.750 "data_size": 65536 00:15:05.750 }, 00:15:05.750 { 00:15:05.750 "name": "BaseBdev3", 00:15:05.750 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:05.750 "is_configured": true, 00:15:05.750 
"data_offset": 0, 00:15:05.750 "data_size": 65536 00:15:05.750 }, 00:15:05.750 { 00:15:05.750 "name": "BaseBdev4", 00:15:05.750 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:05.750 "is_configured": true, 00:15:05.750 "data_offset": 0, 00:15:05.750 "data_size": 65536 00:15:05.750 } 00:15:05.750 ] 00:15:05.750 }' 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.750 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.023 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:06.023 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.023 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.023 [2024-11-20 13:28:47.536352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.023 [2024-11-20 13:28:47.540854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:15:06.023 13:28:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.023 13:28:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:06.023 [2024-11-20 13:28:47.543395] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.972 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.972 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.972 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.972 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.972 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.972 
13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.972 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.972 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.972 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.972 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.972 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.972 "name": "raid_bdev1", 00:15:06.972 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:06.972 "strip_size_kb": 64, 00:15:06.972 "state": "online", 00:15:06.972 "raid_level": "raid5f", 00:15:06.972 "superblock": false, 00:15:06.972 "num_base_bdevs": 4, 00:15:06.972 "num_base_bdevs_discovered": 4, 00:15:06.972 "num_base_bdevs_operational": 4, 00:15:06.972 "process": { 00:15:06.972 "type": "rebuild", 00:15:06.972 "target": "spare", 00:15:06.972 "progress": { 00:15:06.972 "blocks": 19200, 00:15:06.972 "percent": 9 00:15:06.972 } 00:15:06.972 }, 00:15:06.972 "base_bdevs_list": [ 00:15:06.972 { 00:15:06.972 "name": "spare", 00:15:06.972 "uuid": "5bf4b9a9-c15c-585e-9199-83ec435c99f1", 00:15:06.972 "is_configured": true, 00:15:06.972 "data_offset": 0, 00:15:06.972 "data_size": 65536 00:15:06.972 }, 00:15:06.972 { 00:15:06.972 "name": "BaseBdev2", 00:15:06.972 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:06.972 "is_configured": true, 00:15:06.972 "data_offset": 0, 00:15:06.972 "data_size": 65536 00:15:06.972 }, 00:15:06.972 { 00:15:06.972 "name": "BaseBdev3", 00:15:06.972 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:06.972 "is_configured": true, 00:15:06.972 "data_offset": 0, 00:15:06.972 "data_size": 65536 00:15:06.972 }, 00:15:06.972 { 00:15:06.972 "name": "BaseBdev4", 00:15:06.972 "uuid": 
"8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:06.972 "is_configured": true, 00:15:06.972 "data_offset": 0, 00:15:06.972 "data_size": 65536 00:15:06.972 } 00:15:06.972 ] 00:15:06.972 }' 00:15:06.972 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.972 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.972 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.232 [2024-11-20 13:28:48.659684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:07.232 [2024-11-20 13:28:48.751726] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:07.232 [2024-11-20 13:28:48.751930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.232 [2024-11-20 13:28:48.751985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:07.232 [2024-11-20 13:28:48.752035] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.232 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.233 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.233 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.233 "name": "raid_bdev1", 00:15:07.233 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:07.233 "strip_size_kb": 64, 00:15:07.233 "state": "online", 00:15:07.233 "raid_level": "raid5f", 00:15:07.233 "superblock": false, 00:15:07.233 "num_base_bdevs": 4, 00:15:07.233 "num_base_bdevs_discovered": 3, 00:15:07.233 "num_base_bdevs_operational": 3, 00:15:07.233 "base_bdevs_list": [ 00:15:07.233 { 00:15:07.233 "name": null, 00:15:07.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.233 "is_configured": false, 00:15:07.233 "data_offset": 0, 
00:15:07.233 "data_size": 65536 00:15:07.233 }, 00:15:07.233 { 00:15:07.233 "name": "BaseBdev2", 00:15:07.233 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:07.233 "is_configured": true, 00:15:07.233 "data_offset": 0, 00:15:07.233 "data_size": 65536 00:15:07.233 }, 00:15:07.233 { 00:15:07.233 "name": "BaseBdev3", 00:15:07.233 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:07.233 "is_configured": true, 00:15:07.233 "data_offset": 0, 00:15:07.233 "data_size": 65536 00:15:07.233 }, 00:15:07.233 { 00:15:07.233 "name": "BaseBdev4", 00:15:07.233 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:07.233 "is_configured": true, 00:15:07.233 "data_offset": 0, 00:15:07.233 "data_size": 65536 00:15:07.233 } 00:15:07.233 ] 00:15:07.233 }' 00:15:07.233 13:28:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.233 13:28:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.803 "name": "raid_bdev1", 00:15:07.803 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:07.803 "strip_size_kb": 64, 00:15:07.803 "state": "online", 00:15:07.803 "raid_level": "raid5f", 00:15:07.803 "superblock": false, 00:15:07.803 "num_base_bdevs": 4, 00:15:07.803 "num_base_bdevs_discovered": 3, 00:15:07.803 "num_base_bdevs_operational": 3, 00:15:07.803 "base_bdevs_list": [ 00:15:07.803 { 00:15:07.803 "name": null, 00:15:07.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.803 "is_configured": false, 00:15:07.803 "data_offset": 0, 00:15:07.803 "data_size": 65536 00:15:07.803 }, 00:15:07.803 { 00:15:07.803 "name": "BaseBdev2", 00:15:07.803 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:07.803 "is_configured": true, 00:15:07.803 "data_offset": 0, 00:15:07.803 "data_size": 65536 00:15:07.803 }, 00:15:07.803 { 00:15:07.803 "name": "BaseBdev3", 00:15:07.803 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:07.803 "is_configured": true, 00:15:07.803 "data_offset": 0, 00:15:07.803 "data_size": 65536 00:15:07.803 }, 00:15:07.803 { 00:15:07.803 "name": "BaseBdev4", 00:15:07.803 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:07.803 "is_configured": true, 00:15:07.803 "data_offset": 0, 00:15:07.803 "data_size": 65536 00:15:07.803 } 00:15:07.803 ] 00:15:07.803 }' 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.803 [2024-11-20 13:28:49.353479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:07.803 [2024-11-20 13:28:49.357634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027e70 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.803 13:28:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:07.803 [2024-11-20 13:28:49.359897] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:08.743 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.743 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.743 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.743 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.743 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.743 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.743 13:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.743 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.743 13:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.743 13:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.003 "name": "raid_bdev1", 00:15:09.003 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:09.003 "strip_size_kb": 64, 00:15:09.003 "state": "online", 00:15:09.003 "raid_level": "raid5f", 00:15:09.003 "superblock": false, 00:15:09.003 "num_base_bdevs": 4, 00:15:09.003 "num_base_bdevs_discovered": 4, 00:15:09.003 "num_base_bdevs_operational": 4, 00:15:09.003 "process": { 00:15:09.003 "type": "rebuild", 00:15:09.003 "target": "spare", 00:15:09.003 "progress": { 00:15:09.003 "blocks": 19200, 00:15:09.003 "percent": 9 00:15:09.003 } 00:15:09.003 }, 00:15:09.003 "base_bdevs_list": [ 00:15:09.003 { 00:15:09.003 "name": "spare", 00:15:09.003 "uuid": "5bf4b9a9-c15c-585e-9199-83ec435c99f1", 00:15:09.003 "is_configured": true, 00:15:09.003 "data_offset": 0, 00:15:09.003 "data_size": 65536 00:15:09.003 }, 00:15:09.003 { 00:15:09.003 "name": "BaseBdev2", 00:15:09.003 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:09.003 "is_configured": true, 00:15:09.003 "data_offset": 0, 00:15:09.003 "data_size": 65536 00:15:09.003 }, 00:15:09.003 { 00:15:09.003 "name": "BaseBdev3", 00:15:09.003 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:09.003 "is_configured": true, 00:15:09.003 "data_offset": 0, 00:15:09.003 "data_size": 65536 00:15:09.003 }, 00:15:09.003 { 00:15:09.003 "name": "BaseBdev4", 00:15:09.003 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:09.003 "is_configured": true, 00:15:09.003 "data_offset": 0, 00:15:09.003 "data_size": 65536 00:15:09.003 } 00:15:09.003 ] 00:15:09.003 }' 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=519 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.003 "name": "raid_bdev1", 00:15:09.003 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:09.003 "strip_size_kb": 64, 00:15:09.003 "state": "online", 00:15:09.003 "raid_level": "raid5f", 00:15:09.003 "superblock": false, 
00:15:09.003 "num_base_bdevs": 4, 00:15:09.003 "num_base_bdevs_discovered": 4, 00:15:09.003 "num_base_bdevs_operational": 4, 00:15:09.003 "process": { 00:15:09.003 "type": "rebuild", 00:15:09.003 "target": "spare", 00:15:09.003 "progress": { 00:15:09.003 "blocks": 21120, 00:15:09.003 "percent": 10 00:15:09.003 } 00:15:09.003 }, 00:15:09.003 "base_bdevs_list": [ 00:15:09.003 { 00:15:09.003 "name": "spare", 00:15:09.003 "uuid": "5bf4b9a9-c15c-585e-9199-83ec435c99f1", 00:15:09.003 "is_configured": true, 00:15:09.003 "data_offset": 0, 00:15:09.003 "data_size": 65536 00:15:09.003 }, 00:15:09.003 { 00:15:09.003 "name": "BaseBdev2", 00:15:09.003 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:09.003 "is_configured": true, 00:15:09.003 "data_offset": 0, 00:15:09.003 "data_size": 65536 00:15:09.003 }, 00:15:09.003 { 00:15:09.003 "name": "BaseBdev3", 00:15:09.003 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:09.003 "is_configured": true, 00:15:09.003 "data_offset": 0, 00:15:09.003 "data_size": 65536 00:15:09.003 }, 00:15:09.003 { 00:15:09.003 "name": "BaseBdev4", 00:15:09.003 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:09.003 "is_configured": true, 00:15:09.003 "data_offset": 0, 00:15:09.003 "data_size": 65536 00:15:09.003 } 00:15:09.003 ] 00:15:09.003 }' 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.003 13:28:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.381 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.381 13:28:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.381 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.381 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.381 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.381 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.381 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.381 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.381 13:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.381 13:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.381 13:28:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.381 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.381 "name": "raid_bdev1", 00:15:10.381 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:10.381 "strip_size_kb": 64, 00:15:10.381 "state": "online", 00:15:10.381 "raid_level": "raid5f", 00:15:10.381 "superblock": false, 00:15:10.381 "num_base_bdevs": 4, 00:15:10.381 "num_base_bdevs_discovered": 4, 00:15:10.381 "num_base_bdevs_operational": 4, 00:15:10.381 "process": { 00:15:10.382 "type": "rebuild", 00:15:10.382 "target": "spare", 00:15:10.382 "progress": { 00:15:10.382 "blocks": 42240, 00:15:10.382 "percent": 21 00:15:10.382 } 00:15:10.382 }, 00:15:10.382 "base_bdevs_list": [ 00:15:10.382 { 00:15:10.382 "name": "spare", 00:15:10.382 "uuid": "5bf4b9a9-c15c-585e-9199-83ec435c99f1", 00:15:10.382 "is_configured": true, 00:15:10.382 "data_offset": 0, 00:15:10.382 "data_size": 65536 00:15:10.382 }, 00:15:10.382 { 00:15:10.382 
"name": "BaseBdev2", 00:15:10.382 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:10.382 "is_configured": true, 00:15:10.382 "data_offset": 0, 00:15:10.382 "data_size": 65536 00:15:10.382 }, 00:15:10.382 { 00:15:10.382 "name": "BaseBdev3", 00:15:10.382 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:10.382 "is_configured": true, 00:15:10.382 "data_offset": 0, 00:15:10.382 "data_size": 65536 00:15:10.382 }, 00:15:10.382 { 00:15:10.382 "name": "BaseBdev4", 00:15:10.382 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:10.382 "is_configured": true, 00:15:10.382 "data_offset": 0, 00:15:10.382 "data_size": 65536 00:15:10.382 } 00:15:10.382 ] 00:15:10.382 }' 00:15:10.382 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.382 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.382 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.382 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.382 13:28:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.381 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.382 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.382 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.382 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.382 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.382 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.382 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:11.382 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.382 13:28:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.382 13:28:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.382 13:28:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.382 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.382 "name": "raid_bdev1", 00:15:11.382 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:11.382 "strip_size_kb": 64, 00:15:11.382 "state": "online", 00:15:11.382 "raid_level": "raid5f", 00:15:11.382 "superblock": false, 00:15:11.382 "num_base_bdevs": 4, 00:15:11.382 "num_base_bdevs_discovered": 4, 00:15:11.382 "num_base_bdevs_operational": 4, 00:15:11.382 "process": { 00:15:11.382 "type": "rebuild", 00:15:11.382 "target": "spare", 00:15:11.382 "progress": { 00:15:11.382 "blocks": 65280, 00:15:11.382 "percent": 33 00:15:11.382 } 00:15:11.382 }, 00:15:11.382 "base_bdevs_list": [ 00:15:11.382 { 00:15:11.382 "name": "spare", 00:15:11.382 "uuid": "5bf4b9a9-c15c-585e-9199-83ec435c99f1", 00:15:11.382 "is_configured": true, 00:15:11.382 "data_offset": 0, 00:15:11.382 "data_size": 65536 00:15:11.382 }, 00:15:11.382 { 00:15:11.382 "name": "BaseBdev2", 00:15:11.382 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:11.382 "is_configured": true, 00:15:11.382 "data_offset": 0, 00:15:11.382 "data_size": 65536 00:15:11.382 }, 00:15:11.382 { 00:15:11.382 "name": "BaseBdev3", 00:15:11.382 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:11.382 "is_configured": true, 00:15:11.382 "data_offset": 0, 00:15:11.382 "data_size": 65536 00:15:11.382 }, 00:15:11.382 { 00:15:11.382 "name": "BaseBdev4", 00:15:11.382 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:11.382 "is_configured": true, 00:15:11.382 "data_offset": 0, 00:15:11.382 
"data_size": 65536 00:15:11.382 } 00:15:11.382 ] 00:15:11.382 }' 00:15:11.382 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.382 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.382 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.382 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.382 13:28:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:12.319 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:12.319 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.319 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.319 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.319 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.319 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.319 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.319 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.319 13:28:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.319 13:28:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.319 13:28:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.319 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.319 "name": "raid_bdev1", 00:15:12.319 "uuid": 
"1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:12.319 "strip_size_kb": 64, 00:15:12.319 "state": "online", 00:15:12.319 "raid_level": "raid5f", 00:15:12.319 "superblock": false, 00:15:12.319 "num_base_bdevs": 4, 00:15:12.319 "num_base_bdevs_discovered": 4, 00:15:12.319 "num_base_bdevs_operational": 4, 00:15:12.319 "process": { 00:15:12.319 "type": "rebuild", 00:15:12.319 "target": "spare", 00:15:12.319 "progress": { 00:15:12.319 "blocks": 86400, 00:15:12.319 "percent": 43 00:15:12.319 } 00:15:12.319 }, 00:15:12.319 "base_bdevs_list": [ 00:15:12.319 { 00:15:12.319 "name": "spare", 00:15:12.319 "uuid": "5bf4b9a9-c15c-585e-9199-83ec435c99f1", 00:15:12.319 "is_configured": true, 00:15:12.319 "data_offset": 0, 00:15:12.319 "data_size": 65536 00:15:12.319 }, 00:15:12.319 { 00:15:12.319 "name": "BaseBdev2", 00:15:12.319 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:12.319 "is_configured": true, 00:15:12.319 "data_offset": 0, 00:15:12.319 "data_size": 65536 00:15:12.319 }, 00:15:12.319 { 00:15:12.319 "name": "BaseBdev3", 00:15:12.319 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:12.319 "is_configured": true, 00:15:12.319 "data_offset": 0, 00:15:12.319 "data_size": 65536 00:15:12.319 }, 00:15:12.319 { 00:15:12.320 "name": "BaseBdev4", 00:15:12.320 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:12.320 "is_configured": true, 00:15:12.320 "data_offset": 0, 00:15:12.320 "data_size": 65536 00:15:12.320 } 00:15:12.320 ] 00:15:12.320 }' 00:15:12.320 13:28:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.578 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.578 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.579 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.579 13:28:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:15:13.515 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.515 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.515 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.515 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.515 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.515 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.515 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.515 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.515 13:28:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.515 13:28:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.515 13:28:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.515 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.515 "name": "raid_bdev1", 00:15:13.515 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:13.515 "strip_size_kb": 64, 00:15:13.515 "state": "online", 00:15:13.515 "raid_level": "raid5f", 00:15:13.515 "superblock": false, 00:15:13.515 "num_base_bdevs": 4, 00:15:13.515 "num_base_bdevs_discovered": 4, 00:15:13.515 "num_base_bdevs_operational": 4, 00:15:13.515 "process": { 00:15:13.515 "type": "rebuild", 00:15:13.515 "target": "spare", 00:15:13.515 "progress": { 00:15:13.515 "blocks": 107520, 00:15:13.515 "percent": 54 00:15:13.515 } 00:15:13.515 }, 00:15:13.515 "base_bdevs_list": [ 00:15:13.515 { 00:15:13.515 "name": "spare", 00:15:13.515 "uuid": 
"5bf4b9a9-c15c-585e-9199-83ec435c99f1", 00:15:13.515 "is_configured": true, 00:15:13.515 "data_offset": 0, 00:15:13.515 "data_size": 65536 00:15:13.515 }, 00:15:13.515 { 00:15:13.515 "name": "BaseBdev2", 00:15:13.515 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:13.515 "is_configured": true, 00:15:13.515 "data_offset": 0, 00:15:13.515 "data_size": 65536 00:15:13.515 }, 00:15:13.515 { 00:15:13.515 "name": "BaseBdev3", 00:15:13.515 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:13.515 "is_configured": true, 00:15:13.515 "data_offset": 0, 00:15:13.515 "data_size": 65536 00:15:13.515 }, 00:15:13.515 { 00:15:13.515 "name": "BaseBdev4", 00:15:13.515 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:13.515 "is_configured": true, 00:15:13.515 "data_offset": 0, 00:15:13.515 "data_size": 65536 00:15:13.515 } 00:15:13.515 ] 00:15:13.515 }' 00:15:13.515 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.515 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.515 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.775 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.775 13:28:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:14.712 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.712 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.712 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.712 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.712 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.712 13:28:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.712 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.712 13:28:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.712 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.712 13:28:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.712 13:28:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.712 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.712 "name": "raid_bdev1", 00:15:14.712 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:14.712 "strip_size_kb": 64, 00:15:14.712 "state": "online", 00:15:14.712 "raid_level": "raid5f", 00:15:14.712 "superblock": false, 00:15:14.712 "num_base_bdevs": 4, 00:15:14.712 "num_base_bdevs_discovered": 4, 00:15:14.712 "num_base_bdevs_operational": 4, 00:15:14.712 "process": { 00:15:14.712 "type": "rebuild", 00:15:14.712 "target": "spare", 00:15:14.712 "progress": { 00:15:14.712 "blocks": 128640, 00:15:14.712 "percent": 65 00:15:14.712 } 00:15:14.712 }, 00:15:14.712 "base_bdevs_list": [ 00:15:14.712 { 00:15:14.712 "name": "spare", 00:15:14.712 "uuid": "5bf4b9a9-c15c-585e-9199-83ec435c99f1", 00:15:14.712 "is_configured": true, 00:15:14.712 "data_offset": 0, 00:15:14.712 "data_size": 65536 00:15:14.712 }, 00:15:14.712 { 00:15:14.712 "name": "BaseBdev2", 00:15:14.712 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:14.712 "is_configured": true, 00:15:14.712 "data_offset": 0, 00:15:14.712 "data_size": 65536 00:15:14.712 }, 00:15:14.712 { 00:15:14.712 "name": "BaseBdev3", 00:15:14.712 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:14.712 "is_configured": true, 00:15:14.712 "data_offset": 0, 00:15:14.712 "data_size": 65536 00:15:14.712 }, 
00:15:14.712 { 00:15:14.712 "name": "BaseBdev4", 00:15:14.712 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:14.712 "is_configured": true, 00:15:14.712 "data_offset": 0, 00:15:14.712 "data_size": 65536 00:15:14.712 } 00:15:14.712 ] 00:15:14.712 }' 00:15:14.712 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.712 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.712 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.712 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.712 13:28:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.090 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.090 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.090 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.090 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.090 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.090 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.090 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.090 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.090 13:28:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.090 13:28:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.090 13:28:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:16.090 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.090 "name": "raid_bdev1", 00:15:16.090 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:16.090 "strip_size_kb": 64, 00:15:16.090 "state": "online", 00:15:16.090 "raid_level": "raid5f", 00:15:16.090 "superblock": false, 00:15:16.090 "num_base_bdevs": 4, 00:15:16.090 "num_base_bdevs_discovered": 4, 00:15:16.090 "num_base_bdevs_operational": 4, 00:15:16.090 "process": { 00:15:16.090 "type": "rebuild", 00:15:16.090 "target": "spare", 00:15:16.090 "progress": { 00:15:16.090 "blocks": 151680, 00:15:16.090 "percent": 77 00:15:16.090 } 00:15:16.090 }, 00:15:16.090 "base_bdevs_list": [ 00:15:16.090 { 00:15:16.090 "name": "spare", 00:15:16.090 "uuid": "5bf4b9a9-c15c-585e-9199-83ec435c99f1", 00:15:16.091 "is_configured": true, 00:15:16.091 "data_offset": 0, 00:15:16.091 "data_size": 65536 00:15:16.091 }, 00:15:16.091 { 00:15:16.091 "name": "BaseBdev2", 00:15:16.091 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:16.091 "is_configured": true, 00:15:16.091 "data_offset": 0, 00:15:16.091 "data_size": 65536 00:15:16.091 }, 00:15:16.091 { 00:15:16.091 "name": "BaseBdev3", 00:15:16.091 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:16.091 "is_configured": true, 00:15:16.091 "data_offset": 0, 00:15:16.091 "data_size": 65536 00:15:16.091 }, 00:15:16.091 { 00:15:16.091 "name": "BaseBdev4", 00:15:16.091 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:16.091 "is_configured": true, 00:15:16.091 "data_offset": 0, 00:15:16.091 "data_size": 65536 00:15:16.091 } 00:15:16.091 ] 00:15:16.091 }' 00:15:16.091 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.091 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.091 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.091 13:28:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.091 13:28:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.027 "name": "raid_bdev1", 00:15:17.027 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:17.027 "strip_size_kb": 64, 00:15:17.027 "state": "online", 00:15:17.027 "raid_level": "raid5f", 00:15:17.027 "superblock": false, 00:15:17.027 "num_base_bdevs": 4, 00:15:17.027 "num_base_bdevs_discovered": 4, 00:15:17.027 "num_base_bdevs_operational": 4, 00:15:17.027 "process": { 00:15:17.027 "type": "rebuild", 00:15:17.027 "target": "spare", 00:15:17.027 "progress": { 00:15:17.027 "blocks": 172800, 
00:15:17.027 "percent": 87 00:15:17.027 } 00:15:17.027 }, 00:15:17.027 "base_bdevs_list": [ 00:15:17.027 { 00:15:17.027 "name": "spare", 00:15:17.027 "uuid": "5bf4b9a9-c15c-585e-9199-83ec435c99f1", 00:15:17.027 "is_configured": true, 00:15:17.027 "data_offset": 0, 00:15:17.027 "data_size": 65536 00:15:17.027 }, 00:15:17.027 { 00:15:17.027 "name": "BaseBdev2", 00:15:17.027 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:17.027 "is_configured": true, 00:15:17.027 "data_offset": 0, 00:15:17.027 "data_size": 65536 00:15:17.027 }, 00:15:17.027 { 00:15:17.027 "name": "BaseBdev3", 00:15:17.027 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:17.027 "is_configured": true, 00:15:17.027 "data_offset": 0, 00:15:17.027 "data_size": 65536 00:15:17.027 }, 00:15:17.027 { 00:15:17.027 "name": "BaseBdev4", 00:15:17.027 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:17.027 "is_configured": true, 00:15:17.027 "data_offset": 0, 00:15:17.027 "data_size": 65536 00:15:17.027 } 00:15:17.027 ] 00:15:17.027 }' 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.027 13:28:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:18.415 13:28:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.415 13:28:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.415 13:28:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.415 13:28:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:18.415 13:28:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.415 13:28:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.415 13:28:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.415 13:28:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.415 13:28:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.415 13:28:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.415 13:28:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.415 13:28:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.415 "name": "raid_bdev1", 00:15:18.415 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:18.415 "strip_size_kb": 64, 00:15:18.415 "state": "online", 00:15:18.415 "raid_level": "raid5f", 00:15:18.415 "superblock": false, 00:15:18.415 "num_base_bdevs": 4, 00:15:18.415 "num_base_bdevs_discovered": 4, 00:15:18.415 "num_base_bdevs_operational": 4, 00:15:18.415 "process": { 00:15:18.415 "type": "rebuild", 00:15:18.416 "target": "spare", 00:15:18.416 "progress": { 00:15:18.416 "blocks": 195840, 00:15:18.416 "percent": 99 00:15:18.416 } 00:15:18.416 }, 00:15:18.416 "base_bdevs_list": [ 00:15:18.416 { 00:15:18.416 "name": "spare", 00:15:18.416 "uuid": "5bf4b9a9-c15c-585e-9199-83ec435c99f1", 00:15:18.416 "is_configured": true, 00:15:18.416 "data_offset": 0, 00:15:18.416 "data_size": 65536 00:15:18.416 }, 00:15:18.416 { 00:15:18.416 "name": "BaseBdev2", 00:15:18.416 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:18.416 "is_configured": true, 00:15:18.416 "data_offset": 0, 00:15:18.416 "data_size": 65536 00:15:18.416 }, 00:15:18.416 { 00:15:18.416 "name": "BaseBdev3", 00:15:18.416 "uuid": 
"0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:18.416 "is_configured": true, 00:15:18.416 "data_offset": 0, 00:15:18.416 "data_size": 65536 00:15:18.416 }, 00:15:18.416 { 00:15:18.416 "name": "BaseBdev4", 00:15:18.416 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:18.416 "is_configured": true, 00:15:18.416 "data_offset": 0, 00:15:18.416 "data_size": 65536 00:15:18.416 } 00:15:18.416 ] 00:15:18.416 }' 00:15:18.416 13:28:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.416 [2024-11-20 13:28:59.733097] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:18.416 [2024-11-20 13:28:59.733210] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:18.416 [2024-11-20 13:28:59.733261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.416 13:28:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.416 13:28:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.416 13:28:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.416 13:28:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:19.350 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:19.350 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.350 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.350 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.350 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.350 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:15:19.350 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.350 13:29:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.350 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.350 13:29:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.350 13:29:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.350 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.350 "name": "raid_bdev1", 00:15:19.350 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:19.350 "strip_size_kb": 64, 00:15:19.350 "state": "online", 00:15:19.350 "raid_level": "raid5f", 00:15:19.350 "superblock": false, 00:15:19.350 "num_base_bdevs": 4, 00:15:19.350 "num_base_bdevs_discovered": 4, 00:15:19.350 "num_base_bdevs_operational": 4, 00:15:19.350 "base_bdevs_list": [ 00:15:19.350 { 00:15:19.350 "name": "spare", 00:15:19.350 "uuid": "5bf4b9a9-c15c-585e-9199-83ec435c99f1", 00:15:19.350 "is_configured": true, 00:15:19.350 "data_offset": 0, 00:15:19.350 "data_size": 65536 00:15:19.350 }, 00:15:19.350 { 00:15:19.350 "name": "BaseBdev2", 00:15:19.350 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:19.350 "is_configured": true, 00:15:19.350 "data_offset": 0, 00:15:19.350 "data_size": 65536 00:15:19.350 }, 00:15:19.350 { 00:15:19.350 "name": "BaseBdev3", 00:15:19.350 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:19.350 "is_configured": true, 00:15:19.350 "data_offset": 0, 00:15:19.350 "data_size": 65536 00:15:19.350 }, 00:15:19.350 { 00:15:19.350 "name": "BaseBdev4", 00:15:19.350 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:19.350 "is_configured": true, 00:15:19.351 "data_offset": 0, 00:15:19.351 "data_size": 65536 00:15:19.351 } 00:15:19.351 ] 00:15:19.351 }' 00:15:19.351 13:29:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.351 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:19.351 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.351 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:19.351 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:19.351 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:19.351 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.351 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.351 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:19.351 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.351 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.351 13:29:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.351 13:29:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.351 13:29:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.351 13:29:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.351 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.351 "name": "raid_bdev1", 00:15:19.351 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:19.351 "strip_size_kb": 64, 00:15:19.351 "state": "online", 00:15:19.351 "raid_level": "raid5f", 00:15:19.351 "superblock": false, 00:15:19.351 "num_base_bdevs": 4, 00:15:19.351 
"num_base_bdevs_discovered": 4, 00:15:19.351 "num_base_bdevs_operational": 4, 00:15:19.351 "base_bdevs_list": [ 00:15:19.351 { 00:15:19.351 "name": "spare", 00:15:19.351 "uuid": "5bf4b9a9-c15c-585e-9199-83ec435c99f1", 00:15:19.351 "is_configured": true, 00:15:19.351 "data_offset": 0, 00:15:19.351 "data_size": 65536 00:15:19.351 }, 00:15:19.351 { 00:15:19.351 "name": "BaseBdev2", 00:15:19.351 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:19.351 "is_configured": true, 00:15:19.351 "data_offset": 0, 00:15:19.351 "data_size": 65536 00:15:19.351 }, 00:15:19.351 { 00:15:19.351 "name": "BaseBdev3", 00:15:19.351 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:19.351 "is_configured": true, 00:15:19.351 "data_offset": 0, 00:15:19.351 "data_size": 65536 00:15:19.351 }, 00:15:19.351 { 00:15:19.351 "name": "BaseBdev4", 00:15:19.351 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:19.351 "is_configured": true, 00:15:19.351 "data_offset": 0, 00:15:19.351 "data_size": 65536 00:15:19.351 } 00:15:19.351 ] 00:15:19.351 }' 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.609 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.609 "name": "raid_bdev1", 00:15:19.609 "uuid": "1646ca27-005b-41b6-9a16-96167c5e6ccd", 00:15:19.609 "strip_size_kb": 64, 00:15:19.609 "state": "online", 00:15:19.609 "raid_level": "raid5f", 00:15:19.609 "superblock": false, 00:15:19.609 "num_base_bdevs": 4, 00:15:19.609 "num_base_bdevs_discovered": 4, 00:15:19.609 "num_base_bdevs_operational": 4, 00:15:19.609 "base_bdevs_list": [ 00:15:19.609 { 00:15:19.609 "name": "spare", 00:15:19.609 "uuid": "5bf4b9a9-c15c-585e-9199-83ec435c99f1", 00:15:19.609 "is_configured": true, 00:15:19.609 "data_offset": 0, 00:15:19.609 "data_size": 65536 00:15:19.609 }, 00:15:19.609 { 00:15:19.609 "name": "BaseBdev2", 00:15:19.609 "uuid": "247a8255-e3ca-5cd9-8eaf-45f86ad29ef0", 00:15:19.609 "is_configured": true, 00:15:19.609 
"data_offset": 0, 00:15:19.609 "data_size": 65536 00:15:19.609 }, 00:15:19.609 { 00:15:19.609 "name": "BaseBdev3", 00:15:19.609 "uuid": "0bb17412-cdce-5d4e-bf9a-a7e726decb98", 00:15:19.609 "is_configured": true, 00:15:19.609 "data_offset": 0, 00:15:19.609 "data_size": 65536 00:15:19.609 }, 00:15:19.609 { 00:15:19.609 "name": "BaseBdev4", 00:15:19.609 "uuid": "8f182eb2-ea34-55e7-9d46-30a5c9f5e5c6", 00:15:19.609 "is_configured": true, 00:15:19.609 "data_offset": 0, 00:15:19.609 "data_size": 65536 00:15:19.609 } 00:15:19.609 ] 00:15:19.609 }' 00:15:19.610 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.610 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.176 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:20.176 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.176 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.176 [2024-11-20 13:29:01.604327] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.176 [2024-11-20 13:29:01.604380] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.176 [2024-11-20 13:29:01.604478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.176 [2024-11-20 13:29:01.604580] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.177 [2024-11-20 13:29:01.604598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:20.177 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:20.435 /dev/nbd0 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test 
-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.435 1+0 records in 00:15:20.435 1+0 records out 00:15:20.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027154 s, 15.1 MB/s 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:20.435 13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:20.435 
13:29:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:20.694 /dev/nbd1 00:15:20.694 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:20.694 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:20.694 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:20.694 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:20.694 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:20.694 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:20.694 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:20.694 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:20.694 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:20.694 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:20.695 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.695 1+0 records in 00:15:20.695 1+0 records out 00:15:20.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447256 s, 9.2 MB/s 00:15:20.695 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.695 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:20.695 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.695 13:29:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:20.695 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:20.695 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:20.695 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:20.695 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:20.695 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:20.695 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.695 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:20.695 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.695 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:20.695 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.695 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:20.953 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.953 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.953 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.953 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.953 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.953 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.953 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:15:20.953 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.953 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.953 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:21.211 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:21.211 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:21.211 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:21.211 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.211 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.211 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:21.211 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:21.211 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.211 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:21.211 13:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 94783 00:15:21.211 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 94783 ']' 00:15:21.211 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 94783 00:15:21.211 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:21.211 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.212 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94783 00:15:21.212 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:15:21.212 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.212 killing process with pid 94783 00:15:21.212 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94783' 00:15:21.212 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 94783 00:15:21.212 Received shutdown signal, test time was about 60.000000 seconds 00:15:21.212 00:15:21.212 Latency(us) 00:15:21.212 [2024-11-20T13:29:02.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.212 [2024-11-20T13:29:02.880Z] =================================================================================================================== 00:15:21.212 [2024-11-20T13:29:02.880Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:21.212 [2024-11-20 13:29:02.832931] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.212 13:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 94783 00:15:21.471 [2024-11-20 13:29:02.884188] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:21.471 13:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:21.471 00:15:21.471 real 0m18.577s 00:15:21.471 user 0m22.607s 00:15:21.471 sys 0m2.285s 00:15:21.471 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.471 13:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.471 ************************************ 00:15:21.471 END TEST raid5f_rebuild_test 00:15:21.471 ************************************ 00:15:21.729 13:29:03 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:15:21.729 13:29:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:21.730 13:29:03 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.730 13:29:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.730 ************************************ 00:15:21.730 START TEST raid5f_rebuild_test_sb 00:15:21.730 ************************************ 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.730 13:29:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95288 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 95288 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 95288 ']' 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.730 13:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.730 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:21.730 Zero copy mechanism will not be used. 00:15:21.730 [2024-11-20 13:29:03.250777] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:15:21.730 [2024-11-20 13:29:03.250924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95288 ] 00:15:21.988 [2024-11-20 13:29:03.410762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.988 [2024-11-20 13:29:03.437787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.988 [2024-11-20 13:29:03.482872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.988 [2024-11-20 13:29:03.482911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.554 BaseBdev1_malloc 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.554 [2024-11-20 13:29:04.154303] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:22.554 [2024-11-20 13:29:04.154406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.554 [2024-11-20 13:29:04.154470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:22.554 [2024-11-20 13:29:04.154511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.554 [2024-11-20 13:29:04.157423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.554 [2024-11-20 13:29:04.157529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:22.554 BaseBdev1 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.554 BaseBdev2_malloc 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.554 [2024-11-20 13:29:04.183699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:22.554 [2024-11-20 13:29:04.183764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:22.554 [2024-11-20 13:29:04.183788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:22.554 [2024-11-20 13:29:04.183798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.554 [2024-11-20 13:29:04.186141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.554 [2024-11-20 13:29:04.186188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:22.554 BaseBdev2 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.554 BaseBdev3_malloc 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.554 [2024-11-20 13:29:04.212819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:22.554 [2024-11-20 13:29:04.212884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.554 [2024-11-20 13:29:04.212910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:22.554 [2024-11-20 
13:29:04.212919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.554 [2024-11-20 13:29:04.215086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.554 [2024-11-20 13:29:04.215178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:22.554 BaseBdev3 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.554 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.813 BaseBdev4_malloc 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.813 [2024-11-20 13:29:04.253401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:22.813 [2024-11-20 13:29:04.253468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.813 [2024-11-20 13:29:04.253497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:22.813 [2024-11-20 13:29:04.253507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.813 [2024-11-20 13:29:04.255937] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:15:22.813 [2024-11-20 13:29:04.255978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:22.813 BaseBdev4 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.813 spare_malloc 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.813 spare_delay 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.813 [2024-11-20 13:29:04.294770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:22.813 [2024-11-20 13:29:04.294835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.813 [2024-11-20 13:29:04.294860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 
00:15:22.813 [2024-11-20 13:29:04.294869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.813 [2024-11-20 13:29:04.297398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.813 [2024-11-20 13:29:04.297501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:22.813 spare 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.813 [2024-11-20 13:29:04.306869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.813 [2024-11-20 13:29:04.309080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.813 [2024-11-20 13:29:04.309171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:22.813 [2024-11-20 13:29:04.309231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:22.813 [2024-11-20 13:29:04.309454] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:22.813 [2024-11-20 13:29:04.309473] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:22.813 [2024-11-20 13:29:04.309787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:22.813 [2024-11-20 13:29:04.310338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:22.813 [2024-11-20 13:29:04.310361] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000001200 00:15:22.813 [2024-11-20 13:29:04.310527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.813 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.813 13:29:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.813 "name": "raid_bdev1", 00:15:22.813 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:22.813 "strip_size_kb": 64, 00:15:22.813 "state": "online", 00:15:22.813 "raid_level": "raid5f", 00:15:22.813 "superblock": true, 00:15:22.813 "num_base_bdevs": 4, 00:15:22.813 "num_base_bdevs_discovered": 4, 00:15:22.813 "num_base_bdevs_operational": 4, 00:15:22.813 "base_bdevs_list": [ 00:15:22.813 { 00:15:22.813 "name": "BaseBdev1", 00:15:22.813 "uuid": "0be17943-f229-5606-8adf-44090652aa43", 00:15:22.813 "is_configured": true, 00:15:22.813 "data_offset": 2048, 00:15:22.813 "data_size": 63488 00:15:22.813 }, 00:15:22.813 { 00:15:22.813 "name": "BaseBdev2", 00:15:22.813 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:22.813 "is_configured": true, 00:15:22.813 "data_offset": 2048, 00:15:22.814 "data_size": 63488 00:15:22.814 }, 00:15:22.814 { 00:15:22.814 "name": "BaseBdev3", 00:15:22.814 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:22.814 "is_configured": true, 00:15:22.814 "data_offset": 2048, 00:15:22.814 "data_size": 63488 00:15:22.814 }, 00:15:22.814 { 00:15:22.814 "name": "BaseBdev4", 00:15:22.814 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:22.814 "is_configured": true, 00:15:22.814 "data_offset": 2048, 00:15:22.814 "data_size": 63488 00:15:22.814 } 00:15:22.814 ] 00:15:22.814 }' 00:15:22.814 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.814 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.378 13:29:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.378 [2024-11-20 13:29:04.811848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:23.378 13:29:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:23.378 13:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:23.636 [2024-11-20 13:29:05.095202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:15:23.636 /dev/nbd0 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.636 1+0 records in 00:15:23.636 
1+0 records out 00:15:23.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375044 s, 10.9 MB/s 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:23.636 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:15:24.203 496+0 records in 00:15:24.203 496+0 records out 00:15:24.203 97517568 bytes (98 MB, 93 MiB) copied, 0.517656 s, 188 MB/s 00:15:24.203 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:24.203 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:24.203 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:24.203 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:24.203 13:29:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:24.203 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:24.203 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:24.462 [2024-11-20 13:29:05.916283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.462 [2024-11-20 13:29:05.938015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:24.462 13:29:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.462 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.462 "name": "raid_bdev1", 00:15:24.462 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:24.463 "strip_size_kb": 64, 00:15:24.463 "state": "online", 00:15:24.463 "raid_level": "raid5f", 00:15:24.463 "superblock": true, 00:15:24.463 "num_base_bdevs": 4, 00:15:24.463 "num_base_bdevs_discovered": 3, 00:15:24.463 "num_base_bdevs_operational": 3, 00:15:24.463 
"base_bdevs_list": [ 00:15:24.463 { 00:15:24.463 "name": null, 00:15:24.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.463 "is_configured": false, 00:15:24.463 "data_offset": 0, 00:15:24.463 "data_size": 63488 00:15:24.463 }, 00:15:24.463 { 00:15:24.463 "name": "BaseBdev2", 00:15:24.463 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:24.463 "is_configured": true, 00:15:24.463 "data_offset": 2048, 00:15:24.463 "data_size": 63488 00:15:24.463 }, 00:15:24.463 { 00:15:24.463 "name": "BaseBdev3", 00:15:24.463 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:24.463 "is_configured": true, 00:15:24.463 "data_offset": 2048, 00:15:24.463 "data_size": 63488 00:15:24.463 }, 00:15:24.463 { 00:15:24.463 "name": "BaseBdev4", 00:15:24.463 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:24.463 "is_configured": true, 00:15:24.463 "data_offset": 2048, 00:15:24.463 "data_size": 63488 00:15:24.463 } 00:15:24.463 ] 00:15:24.463 }' 00:15:24.463 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.463 13:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.031 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:25.031 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.031 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.031 [2024-11-20 13:29:06.421230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:25.031 [2024-11-20 13:29:06.425771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:15:25.031 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.031 13:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:25.031 [2024-11-20 13:29:06.428638] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.968 "name": "raid_bdev1", 00:15:25.968 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:25.968 "strip_size_kb": 64, 00:15:25.968 "state": "online", 00:15:25.968 "raid_level": "raid5f", 00:15:25.968 "superblock": true, 00:15:25.968 "num_base_bdevs": 4, 00:15:25.968 "num_base_bdevs_discovered": 4, 00:15:25.968 "num_base_bdevs_operational": 4, 00:15:25.968 "process": { 00:15:25.968 "type": "rebuild", 00:15:25.968 "target": "spare", 00:15:25.968 "progress": { 00:15:25.968 "blocks": 19200, 00:15:25.968 "percent": 10 00:15:25.968 } 00:15:25.968 }, 00:15:25.968 "base_bdevs_list": [ 00:15:25.968 { 00:15:25.968 "name": "spare", 00:15:25.968 "uuid": 
"5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 00:15:25.968 "is_configured": true, 00:15:25.968 "data_offset": 2048, 00:15:25.968 "data_size": 63488 00:15:25.968 }, 00:15:25.968 { 00:15:25.968 "name": "BaseBdev2", 00:15:25.968 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:25.968 "is_configured": true, 00:15:25.968 "data_offset": 2048, 00:15:25.968 "data_size": 63488 00:15:25.968 }, 00:15:25.968 { 00:15:25.968 "name": "BaseBdev3", 00:15:25.968 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:25.968 "is_configured": true, 00:15:25.968 "data_offset": 2048, 00:15:25.968 "data_size": 63488 00:15:25.968 }, 00:15:25.968 { 00:15:25.968 "name": "BaseBdev4", 00:15:25.968 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:25.968 "is_configured": true, 00:15:25.968 "data_offset": 2048, 00:15:25.968 "data_size": 63488 00:15:25.968 } 00:15:25.968 ] 00:15:25.968 }' 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.968 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.968 [2024-11-20 13:29:07.589175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:26.226 [2024-11-20 13:29:07.638126] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:26.226 [2024-11-20 13:29:07.638208] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.226 [2024-11-20 13:29:07.638233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:26.226 [2024-11-20 13:29:07.638245] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.226 "name": "raid_bdev1", 00:15:26.226 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:26.226 "strip_size_kb": 64, 00:15:26.226 "state": "online", 00:15:26.226 "raid_level": "raid5f", 00:15:26.226 "superblock": true, 00:15:26.226 "num_base_bdevs": 4, 00:15:26.226 "num_base_bdevs_discovered": 3, 00:15:26.226 "num_base_bdevs_operational": 3, 00:15:26.226 "base_bdevs_list": [ 00:15:26.226 { 00:15:26.226 "name": null, 00:15:26.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.226 "is_configured": false, 00:15:26.226 "data_offset": 0, 00:15:26.226 "data_size": 63488 00:15:26.226 }, 00:15:26.226 { 00:15:26.226 "name": "BaseBdev2", 00:15:26.226 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:26.226 "is_configured": true, 00:15:26.226 "data_offset": 2048, 00:15:26.226 "data_size": 63488 00:15:26.226 }, 00:15:26.226 { 00:15:26.226 "name": "BaseBdev3", 00:15:26.226 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:26.226 "is_configured": true, 00:15:26.226 "data_offset": 2048, 00:15:26.226 "data_size": 63488 00:15:26.226 }, 00:15:26.226 { 00:15:26.226 "name": "BaseBdev4", 00:15:26.226 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:26.226 "is_configured": true, 00:15:26.226 "data_offset": 2048, 00:15:26.226 "data_size": 63488 00:15:26.226 } 00:15:26.226 ] 00:15:26.226 }' 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.226 13:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.486 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.486 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.486 
13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.486 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.486 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.486 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.486 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.486 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.486 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.486 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.486 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.486 "name": "raid_bdev1", 00:15:26.486 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:26.486 "strip_size_kb": 64, 00:15:26.486 "state": "online", 00:15:26.486 "raid_level": "raid5f", 00:15:26.486 "superblock": true, 00:15:26.486 "num_base_bdevs": 4, 00:15:26.486 "num_base_bdevs_discovered": 3, 00:15:26.486 "num_base_bdevs_operational": 3, 00:15:26.486 "base_bdevs_list": [ 00:15:26.486 { 00:15:26.486 "name": null, 00:15:26.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.486 "is_configured": false, 00:15:26.486 "data_offset": 0, 00:15:26.486 "data_size": 63488 00:15:26.486 }, 00:15:26.486 { 00:15:26.486 "name": "BaseBdev2", 00:15:26.486 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:26.486 "is_configured": true, 00:15:26.486 "data_offset": 2048, 00:15:26.486 "data_size": 63488 00:15:26.486 }, 00:15:26.486 { 00:15:26.486 "name": "BaseBdev3", 00:15:26.486 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:26.486 "is_configured": true, 00:15:26.486 "data_offset": 2048, 00:15:26.486 
"data_size": 63488 00:15:26.486 }, 00:15:26.486 { 00:15:26.486 "name": "BaseBdev4", 00:15:26.486 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:26.486 "is_configured": true, 00:15:26.486 "data_offset": 2048, 00:15:26.486 "data_size": 63488 00:15:26.486 } 00:15:26.486 ] 00:15:26.486 }' 00:15:26.486 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.744 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.744 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.744 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:26.744 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:26.744 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.744 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.744 [2024-11-20 13:29:08.211518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:26.745 [2024-11-20 13:29:08.215677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027170 00:15:26.745 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.745 13:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:26.745 [2024-11-20 13:29:08.217901] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:27.681 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.681 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.681 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.681 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.681 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.681 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.681 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.681 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.681 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.681 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.681 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.681 "name": "raid_bdev1", 00:15:27.681 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:27.681 "strip_size_kb": 64, 00:15:27.681 "state": "online", 00:15:27.681 "raid_level": "raid5f", 00:15:27.681 "superblock": true, 00:15:27.681 "num_base_bdevs": 4, 00:15:27.681 "num_base_bdevs_discovered": 4, 00:15:27.681 "num_base_bdevs_operational": 4, 00:15:27.681 "process": { 00:15:27.681 "type": "rebuild", 00:15:27.681 "target": "spare", 00:15:27.681 "progress": { 00:15:27.681 "blocks": 19200, 00:15:27.681 "percent": 10 00:15:27.681 } 00:15:27.681 }, 00:15:27.681 "base_bdevs_list": [ 00:15:27.681 { 00:15:27.681 "name": "spare", 00:15:27.681 "uuid": "5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 00:15:27.681 "is_configured": true, 00:15:27.681 "data_offset": 2048, 00:15:27.681 "data_size": 63488 00:15:27.681 }, 00:15:27.681 { 00:15:27.681 "name": "BaseBdev2", 00:15:27.681 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:27.681 "is_configured": true, 00:15:27.681 "data_offset": 2048, 00:15:27.681 "data_size": 63488 00:15:27.681 }, 00:15:27.681 { 
00:15:27.681 "name": "BaseBdev3", 00:15:27.681 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:27.681 "is_configured": true, 00:15:27.681 "data_offset": 2048, 00:15:27.681 "data_size": 63488 00:15:27.681 }, 00:15:27.681 { 00:15:27.681 "name": "BaseBdev4", 00:15:27.681 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:27.681 "is_configured": true, 00:15:27.681 "data_offset": 2048, 00:15:27.681 "data_size": 63488 00:15:27.681 } 00:15:27.681 ] 00:15:27.681 }' 00:15:27.681 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.681 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.681 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:27.940 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=538 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.940 "name": "raid_bdev1", 00:15:27.940 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:27.940 "strip_size_kb": 64, 00:15:27.940 "state": "online", 00:15:27.940 "raid_level": "raid5f", 00:15:27.940 "superblock": true, 00:15:27.940 "num_base_bdevs": 4, 00:15:27.940 "num_base_bdevs_discovered": 4, 00:15:27.940 "num_base_bdevs_operational": 4, 00:15:27.940 "process": { 00:15:27.940 "type": "rebuild", 00:15:27.940 "target": "spare", 00:15:27.940 "progress": { 00:15:27.940 "blocks": 21120, 00:15:27.940 "percent": 11 00:15:27.940 } 00:15:27.940 }, 00:15:27.940 "base_bdevs_list": [ 00:15:27.940 { 00:15:27.940 "name": "spare", 00:15:27.940 "uuid": "5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 00:15:27.940 "is_configured": true, 00:15:27.940 "data_offset": 2048, 00:15:27.940 "data_size": 63488 00:15:27.940 }, 00:15:27.940 { 00:15:27.940 "name": "BaseBdev2", 00:15:27.940 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:27.940 "is_configured": true, 00:15:27.940 "data_offset": 2048, 00:15:27.940 "data_size": 63488 00:15:27.940 }, 00:15:27.940 { 
00:15:27.940 "name": "BaseBdev3", 00:15:27.940 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:27.940 "is_configured": true, 00:15:27.940 "data_offset": 2048, 00:15:27.940 "data_size": 63488 00:15:27.940 }, 00:15:27.940 { 00:15:27.940 "name": "BaseBdev4", 00:15:27.940 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:27.940 "is_configured": true, 00:15:27.940 "data_offset": 2048, 00:15:27.940 "data_size": 63488 00:15:27.940 } 00:15:27.940 ] 00:15:27.940 }' 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.940 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.941 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.941 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.941 13:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:28.915 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:28.915 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.915 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.915 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.915 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.915 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.915 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.915 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.915 13:29:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.915 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.915 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.915 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.915 "name": "raid_bdev1", 00:15:28.915 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:28.915 "strip_size_kb": 64, 00:15:28.915 "state": "online", 00:15:28.915 "raid_level": "raid5f", 00:15:28.915 "superblock": true, 00:15:28.915 "num_base_bdevs": 4, 00:15:28.915 "num_base_bdevs_discovered": 4, 00:15:28.915 "num_base_bdevs_operational": 4, 00:15:28.915 "process": { 00:15:28.915 "type": "rebuild", 00:15:28.915 "target": "spare", 00:15:28.915 "progress": { 00:15:28.915 "blocks": 42240, 00:15:28.915 "percent": 22 00:15:28.915 } 00:15:28.915 }, 00:15:28.915 "base_bdevs_list": [ 00:15:28.915 { 00:15:28.915 "name": "spare", 00:15:28.915 "uuid": "5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 00:15:28.915 "is_configured": true, 00:15:28.915 "data_offset": 2048, 00:15:28.915 "data_size": 63488 00:15:28.915 }, 00:15:28.915 { 00:15:28.915 "name": "BaseBdev2", 00:15:28.915 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:28.915 "is_configured": true, 00:15:28.915 "data_offset": 2048, 00:15:28.915 "data_size": 63488 00:15:28.915 }, 00:15:28.915 { 00:15:28.915 "name": "BaseBdev3", 00:15:28.915 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:28.915 "is_configured": true, 00:15:28.915 "data_offset": 2048, 00:15:28.915 "data_size": 63488 00:15:28.915 }, 00:15:28.915 { 00:15:28.915 "name": "BaseBdev4", 00:15:28.915 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:28.915 "is_configured": true, 00:15:28.915 "data_offset": 2048, 00:15:28.915 "data_size": 63488 00:15:28.915 } 00:15:28.915 ] 00:15:28.915 }' 00:15:28.915 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.174 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.174 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.174 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.174 13:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.108 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.108 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.108 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.108 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.108 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.108 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.108 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.108 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.108 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.108 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.108 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.108 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.108 "name": "raid_bdev1", 00:15:30.108 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:30.108 "strip_size_kb": 64, 00:15:30.108 "state": 
"online", 00:15:30.108 "raid_level": "raid5f", 00:15:30.108 "superblock": true, 00:15:30.108 "num_base_bdevs": 4, 00:15:30.108 "num_base_bdevs_discovered": 4, 00:15:30.108 "num_base_bdevs_operational": 4, 00:15:30.108 "process": { 00:15:30.108 "type": "rebuild", 00:15:30.108 "target": "spare", 00:15:30.108 "progress": { 00:15:30.108 "blocks": 65280, 00:15:30.108 "percent": 34 00:15:30.108 } 00:15:30.108 }, 00:15:30.108 "base_bdevs_list": [ 00:15:30.108 { 00:15:30.108 "name": "spare", 00:15:30.108 "uuid": "5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 00:15:30.108 "is_configured": true, 00:15:30.108 "data_offset": 2048, 00:15:30.108 "data_size": 63488 00:15:30.108 }, 00:15:30.108 { 00:15:30.108 "name": "BaseBdev2", 00:15:30.108 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:30.108 "is_configured": true, 00:15:30.108 "data_offset": 2048, 00:15:30.108 "data_size": 63488 00:15:30.108 }, 00:15:30.108 { 00:15:30.108 "name": "BaseBdev3", 00:15:30.108 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:30.108 "is_configured": true, 00:15:30.108 "data_offset": 2048, 00:15:30.108 "data_size": 63488 00:15:30.108 }, 00:15:30.108 { 00:15:30.108 "name": "BaseBdev4", 00:15:30.108 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:30.108 "is_configured": true, 00:15:30.108 "data_offset": 2048, 00:15:30.108 "data_size": 63488 00:15:30.108 } 00:15:30.108 ] 00:15:30.108 }' 00:15:30.108 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.108 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.108 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.367 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.367 13:29:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.302 "name": "raid_bdev1", 00:15:31.302 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:31.302 "strip_size_kb": 64, 00:15:31.302 "state": "online", 00:15:31.302 "raid_level": "raid5f", 00:15:31.302 "superblock": true, 00:15:31.302 "num_base_bdevs": 4, 00:15:31.302 "num_base_bdevs_discovered": 4, 00:15:31.302 "num_base_bdevs_operational": 4, 00:15:31.302 "process": { 00:15:31.302 "type": "rebuild", 00:15:31.302 "target": "spare", 00:15:31.302 "progress": { 00:15:31.302 "blocks": 86400, 00:15:31.302 "percent": 45 00:15:31.302 } 00:15:31.302 }, 00:15:31.302 "base_bdevs_list": [ 00:15:31.302 { 00:15:31.302 "name": "spare", 00:15:31.302 "uuid": "5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 
00:15:31.302 "is_configured": true, 00:15:31.302 "data_offset": 2048, 00:15:31.302 "data_size": 63488 00:15:31.302 }, 00:15:31.302 { 00:15:31.302 "name": "BaseBdev2", 00:15:31.302 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:31.302 "is_configured": true, 00:15:31.302 "data_offset": 2048, 00:15:31.302 "data_size": 63488 00:15:31.302 }, 00:15:31.302 { 00:15:31.302 "name": "BaseBdev3", 00:15:31.302 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:31.302 "is_configured": true, 00:15:31.302 "data_offset": 2048, 00:15:31.302 "data_size": 63488 00:15:31.302 }, 00:15:31.302 { 00:15:31.302 "name": "BaseBdev4", 00:15:31.302 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:31.302 "is_configured": true, 00:15:31.302 "data_offset": 2048, 00:15:31.302 "data_size": 63488 00:15:31.302 } 00:15:31.302 ] 00:15:31.302 }' 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.302 13:29:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:32.677 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.677 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.677 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.677 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.677 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.677 13:29:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.677 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.677 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.677 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.677 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.677 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.677 13:29:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.677 "name": "raid_bdev1", 00:15:32.677 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:32.677 "strip_size_kb": 64, 00:15:32.677 "state": "online", 00:15:32.677 "raid_level": "raid5f", 00:15:32.677 "superblock": true, 00:15:32.677 "num_base_bdevs": 4, 00:15:32.677 "num_base_bdevs_discovered": 4, 00:15:32.677 "num_base_bdevs_operational": 4, 00:15:32.677 "process": { 00:15:32.677 "type": "rebuild", 00:15:32.677 "target": "spare", 00:15:32.677 "progress": { 00:15:32.677 "blocks": 109440, 00:15:32.677 "percent": 57 00:15:32.677 } 00:15:32.677 }, 00:15:32.677 "base_bdevs_list": [ 00:15:32.677 { 00:15:32.677 "name": "spare", 00:15:32.677 "uuid": "5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 00:15:32.677 "is_configured": true, 00:15:32.677 "data_offset": 2048, 00:15:32.677 "data_size": 63488 00:15:32.677 }, 00:15:32.677 { 00:15:32.677 "name": "BaseBdev2", 00:15:32.677 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:32.677 "is_configured": true, 00:15:32.677 "data_offset": 2048, 00:15:32.677 "data_size": 63488 00:15:32.677 }, 00:15:32.677 { 00:15:32.677 "name": "BaseBdev3", 00:15:32.677 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:32.677 "is_configured": true, 00:15:32.677 "data_offset": 2048, 00:15:32.677 
"data_size": 63488 00:15:32.677 }, 00:15:32.677 { 00:15:32.677 "name": "BaseBdev4", 00:15:32.677 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:32.677 "is_configured": true, 00:15:32.677 "data_offset": 2048, 00:15:32.677 "data_size": 63488 00:15:32.677 } 00:15:32.677 ] 00:15:32.677 }' 00:15:32.677 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.677 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.677 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.677 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.677 13:29:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:33.675 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:33.675 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.675 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.675 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.675 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.675 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.675 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.675 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.675 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.675 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.675 
13:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.675 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.675 "name": "raid_bdev1", 00:15:33.675 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:33.675 "strip_size_kb": 64, 00:15:33.675 "state": "online", 00:15:33.675 "raid_level": "raid5f", 00:15:33.675 "superblock": true, 00:15:33.675 "num_base_bdevs": 4, 00:15:33.675 "num_base_bdevs_discovered": 4, 00:15:33.675 "num_base_bdevs_operational": 4, 00:15:33.675 "process": { 00:15:33.675 "type": "rebuild", 00:15:33.675 "target": "spare", 00:15:33.675 "progress": { 00:15:33.675 "blocks": 130560, 00:15:33.675 "percent": 68 00:15:33.675 } 00:15:33.675 }, 00:15:33.675 "base_bdevs_list": [ 00:15:33.675 { 00:15:33.675 "name": "spare", 00:15:33.675 "uuid": "5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 00:15:33.675 "is_configured": true, 00:15:33.675 "data_offset": 2048, 00:15:33.675 "data_size": 63488 00:15:33.675 }, 00:15:33.675 { 00:15:33.675 "name": "BaseBdev2", 00:15:33.675 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:33.675 "is_configured": true, 00:15:33.675 "data_offset": 2048, 00:15:33.675 "data_size": 63488 00:15:33.675 }, 00:15:33.675 { 00:15:33.675 "name": "BaseBdev3", 00:15:33.675 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:33.675 "is_configured": true, 00:15:33.675 "data_offset": 2048, 00:15:33.675 "data_size": 63488 00:15:33.675 }, 00:15:33.675 { 00:15:33.675 "name": "BaseBdev4", 00:15:33.675 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:33.675 "is_configured": true, 00:15:33.675 "data_offset": 2048, 00:15:33.675 "data_size": 63488 00:15:33.675 } 00:15:33.675 ] 00:15:33.675 }' 00:15:33.675 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.675 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.675 13:29:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.675 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.675 13:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:34.608 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:34.608 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.608 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.608 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.608 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.608 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.608 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.608 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.608 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.608 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.608 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.866 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.866 "name": "raid_bdev1", 00:15:34.866 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:34.866 "strip_size_kb": 64, 00:15:34.866 "state": "online", 00:15:34.866 "raid_level": "raid5f", 00:15:34.866 "superblock": true, 00:15:34.866 "num_base_bdevs": 4, 00:15:34.866 "num_base_bdevs_discovered": 4, 00:15:34.866 "num_base_bdevs_operational": 
4, 00:15:34.866 "process": { 00:15:34.866 "type": "rebuild", 00:15:34.866 "target": "spare", 00:15:34.867 "progress": { 00:15:34.867 "blocks": 151680, 00:15:34.867 "percent": 79 00:15:34.867 } 00:15:34.867 }, 00:15:34.867 "base_bdevs_list": [ 00:15:34.867 { 00:15:34.867 "name": "spare", 00:15:34.867 "uuid": "5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 00:15:34.867 "is_configured": true, 00:15:34.867 "data_offset": 2048, 00:15:34.867 "data_size": 63488 00:15:34.867 }, 00:15:34.867 { 00:15:34.867 "name": "BaseBdev2", 00:15:34.867 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:34.867 "is_configured": true, 00:15:34.867 "data_offset": 2048, 00:15:34.867 "data_size": 63488 00:15:34.867 }, 00:15:34.867 { 00:15:34.867 "name": "BaseBdev3", 00:15:34.867 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:34.867 "is_configured": true, 00:15:34.867 "data_offset": 2048, 00:15:34.867 "data_size": 63488 00:15:34.867 }, 00:15:34.867 { 00:15:34.867 "name": "BaseBdev4", 00:15:34.867 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:34.867 "is_configured": true, 00:15:34.867 "data_offset": 2048, 00:15:34.867 "data_size": 63488 00:15:34.867 } 00:15:34.867 ] 00:15:34.867 }' 00:15:34.867 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.867 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.867 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.867 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.867 13:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:35.803 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:35.803 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.803 
13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.803 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.803 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.803 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.803 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.803 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.803 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.803 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.803 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.803 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.803 "name": "raid_bdev1", 00:15:35.803 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:35.803 "strip_size_kb": 64, 00:15:35.803 "state": "online", 00:15:35.803 "raid_level": "raid5f", 00:15:35.803 "superblock": true, 00:15:35.803 "num_base_bdevs": 4, 00:15:35.803 "num_base_bdevs_discovered": 4, 00:15:35.803 "num_base_bdevs_operational": 4, 00:15:35.803 "process": { 00:15:35.803 "type": "rebuild", 00:15:35.803 "target": "spare", 00:15:35.803 "progress": { 00:15:35.803 "blocks": 174720, 00:15:35.803 "percent": 91 00:15:35.803 } 00:15:35.803 }, 00:15:35.803 "base_bdevs_list": [ 00:15:35.803 { 00:15:35.803 "name": "spare", 00:15:35.803 "uuid": "5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 00:15:35.803 "is_configured": true, 00:15:35.803 "data_offset": 2048, 00:15:35.803 "data_size": 63488 00:15:35.803 }, 00:15:35.803 { 00:15:35.803 "name": "BaseBdev2", 00:15:35.803 "uuid": 
"a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:35.803 "is_configured": true, 00:15:35.803 "data_offset": 2048, 00:15:35.803 "data_size": 63488 00:15:35.803 }, 00:15:35.803 { 00:15:35.803 "name": "BaseBdev3", 00:15:35.803 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:35.803 "is_configured": true, 00:15:35.803 "data_offset": 2048, 00:15:35.803 "data_size": 63488 00:15:35.803 }, 00:15:35.803 { 00:15:35.803 "name": "BaseBdev4", 00:15:35.803 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:35.803 "is_configured": true, 00:15:35.803 "data_offset": 2048, 00:15:35.803 "data_size": 63488 00:15:35.803 } 00:15:35.803 ] 00:15:35.803 }' 00:15:35.803 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.068 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.068 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.068 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.068 13:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:36.653 [2024-11-20 13:29:18.292762] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:36.653 [2024-11-20 13:29:18.292929] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:36.653 [2024-11-20 13:29:18.293172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.911 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.911 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.911 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.911 13:29:18 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.911 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.911 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.911 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.911 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.911 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.911 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.911 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.168 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.168 "name": "raid_bdev1", 00:15:37.168 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:37.168 "strip_size_kb": 64, 00:15:37.168 "state": "online", 00:15:37.168 "raid_level": "raid5f", 00:15:37.168 "superblock": true, 00:15:37.168 "num_base_bdevs": 4, 00:15:37.168 "num_base_bdevs_discovered": 4, 00:15:37.168 "num_base_bdevs_operational": 4, 00:15:37.168 "base_bdevs_list": [ 00:15:37.168 { 00:15:37.168 "name": "spare", 00:15:37.168 "uuid": "5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 00:15:37.168 "is_configured": true, 00:15:37.168 "data_offset": 2048, 00:15:37.168 "data_size": 63488 00:15:37.168 }, 00:15:37.168 { 00:15:37.168 "name": "BaseBdev2", 00:15:37.168 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:37.168 "is_configured": true, 00:15:37.168 "data_offset": 2048, 00:15:37.168 "data_size": 63488 00:15:37.168 }, 00:15:37.168 { 00:15:37.168 "name": "BaseBdev3", 00:15:37.168 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:37.168 "is_configured": true, 00:15:37.168 "data_offset": 2048, 00:15:37.168 "data_size": 63488 00:15:37.168 }, 
00:15:37.168 { 00:15:37.168 "name": "BaseBdev4", 00:15:37.168 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:37.168 "is_configured": true, 00:15:37.168 "data_offset": 2048, 00:15:37.168 "data_size": 63488 00:15:37.168 } 00:15:37.168 ] 00:15:37.168 }' 00:15:37.168 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.168 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:37.168 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.168 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:37.168 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:37.168 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:37.168 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.168 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:37.168 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:37.168 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.168 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.168 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.168 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.168 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.168 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.169 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.169 "name": "raid_bdev1", 00:15:37.169 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:37.169 "strip_size_kb": 64, 00:15:37.169 "state": "online", 00:15:37.169 "raid_level": "raid5f", 00:15:37.169 "superblock": true, 00:15:37.169 "num_base_bdevs": 4, 00:15:37.169 "num_base_bdevs_discovered": 4, 00:15:37.169 "num_base_bdevs_operational": 4, 00:15:37.169 "base_bdevs_list": [ 00:15:37.169 { 00:15:37.169 "name": "spare", 00:15:37.169 "uuid": "5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 00:15:37.169 "is_configured": true, 00:15:37.169 "data_offset": 2048, 00:15:37.169 "data_size": 63488 00:15:37.169 }, 00:15:37.169 { 00:15:37.169 "name": "BaseBdev2", 00:15:37.169 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:37.169 "is_configured": true, 00:15:37.169 "data_offset": 2048, 00:15:37.169 "data_size": 63488 00:15:37.169 }, 00:15:37.169 { 00:15:37.169 "name": "BaseBdev3", 00:15:37.169 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:37.169 "is_configured": true, 00:15:37.169 "data_offset": 2048, 00:15:37.169 "data_size": 63488 00:15:37.169 }, 00:15:37.169 { 00:15:37.169 "name": "BaseBdev4", 00:15:37.169 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:37.169 "is_configured": true, 00:15:37.169 "data_offset": 2048, 00:15:37.169 "data_size": 63488 00:15:37.169 } 00:15:37.169 ] 00:15:37.169 }' 00:15:37.169 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.169 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:37.169 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.426 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:37.427 13:29:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.427 "name": "raid_bdev1", 00:15:37.427 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:37.427 "strip_size_kb": 64, 00:15:37.427 "state": "online", 00:15:37.427 "raid_level": "raid5f", 00:15:37.427 "superblock": true, 00:15:37.427 "num_base_bdevs": 4, 00:15:37.427 "num_base_bdevs_discovered": 4, 00:15:37.427 "num_base_bdevs_operational": 4, 00:15:37.427 
"base_bdevs_list": [ 00:15:37.427 { 00:15:37.427 "name": "spare", 00:15:37.427 "uuid": "5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 00:15:37.427 "is_configured": true, 00:15:37.427 "data_offset": 2048, 00:15:37.427 "data_size": 63488 00:15:37.427 }, 00:15:37.427 { 00:15:37.427 "name": "BaseBdev2", 00:15:37.427 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:37.427 "is_configured": true, 00:15:37.427 "data_offset": 2048, 00:15:37.427 "data_size": 63488 00:15:37.427 }, 00:15:37.427 { 00:15:37.427 "name": "BaseBdev3", 00:15:37.427 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:37.427 "is_configured": true, 00:15:37.427 "data_offset": 2048, 00:15:37.427 "data_size": 63488 00:15:37.427 }, 00:15:37.427 { 00:15:37.427 "name": "BaseBdev4", 00:15:37.427 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:37.427 "is_configured": true, 00:15:37.427 "data_offset": 2048, 00:15:37.427 "data_size": 63488 00:15:37.427 } 00:15:37.427 ] 00:15:37.427 }' 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.427 13:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.686 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:37.686 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.686 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.686 [2024-11-20 13:29:19.309651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.686 [2024-11-20 13:29:19.309750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.686 [2024-11-20 13:29:19.309860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.686 [2024-11-20 13:29:19.309966] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:15:37.686 [2024-11-20 13:29:19.309980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:37.686 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.686 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.686 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.686 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.686 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:37.686 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.944 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:37.944 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:37.944 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:37.944 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:37.944 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.944 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:37.944 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:37.944 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:37.944 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:37.944 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:37.944 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:15:37.944 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:37.944 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:37.944 /dev/nbd0 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:38.202 1+0 records in 00:15:38.202 1+0 records out 00:15:38.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544645 s, 7.5 MB/s 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:38.202 13:29:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:38.202 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:38.202 /dev/nbd1 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:15:38.490 1+0 records in 00:15:38.490 1+0 records out 00:15:38.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434969 s, 9.4 MB/s 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.490 13:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:38.771 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:15:38.771 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.771 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.771 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.771 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.771 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.771 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:38.771 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.771 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.771 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:39.030 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:39.030 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:39.030 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:39.030 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.030 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.030 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:39.030 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:39.030 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.030 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:39.030 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:39.030 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.030 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.030 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.030 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:39.030 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.030 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.030 [2024-11-20 13:29:20.480412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:39.030 [2024-11-20 13:29:20.480487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.030 [2024-11-20 13:29:20.480510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:39.030 [2024-11-20 13:29:20.480520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.030 [2024-11-20 13:29:20.482836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.031 [2024-11-20 13:29:20.482882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:39.031 [2024-11-20 13:29:20.482979] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:39.031 [2024-11-20 13:29:20.483160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:39.031 [2024-11-20 13:29:20.483312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.031 [2024-11-20 13:29:20.483429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.031 [2024-11-20 13:29:20.483519] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:39.031 spare 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.031 [2024-11-20 13:29:20.583459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:39.031 [2024-11-20 13:29:20.583528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:39.031 [2024-11-20 13:29:20.583895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045820 00:15:39.031 [2024-11-20 13:29:20.584459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:39.031 [2024-11-20 13:29:20.584483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:39.031 [2024-11-20 13:29:20.584692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.031 "name": "raid_bdev1", 00:15:39.031 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:39.031 "strip_size_kb": 64, 00:15:39.031 "state": "online", 00:15:39.031 "raid_level": "raid5f", 00:15:39.031 "superblock": true, 00:15:39.031 "num_base_bdevs": 4, 00:15:39.031 "num_base_bdevs_discovered": 4, 00:15:39.031 "num_base_bdevs_operational": 4, 00:15:39.031 "base_bdevs_list": [ 00:15:39.031 { 00:15:39.031 "name": "spare", 00:15:39.031 "uuid": "5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 00:15:39.031 "is_configured": true, 00:15:39.031 "data_offset": 2048, 00:15:39.031 "data_size": 63488 00:15:39.031 }, 00:15:39.031 { 00:15:39.031 "name": "BaseBdev2", 00:15:39.031 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:39.031 "is_configured": true, 00:15:39.031 "data_offset": 
2048, 00:15:39.031 "data_size": 63488 00:15:39.031 }, 00:15:39.031 { 00:15:39.031 "name": "BaseBdev3", 00:15:39.031 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:39.031 "is_configured": true, 00:15:39.031 "data_offset": 2048, 00:15:39.031 "data_size": 63488 00:15:39.031 }, 00:15:39.031 { 00:15:39.031 "name": "BaseBdev4", 00:15:39.031 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:39.031 "is_configured": true, 00:15:39.031 "data_offset": 2048, 00:15:39.031 "data_size": 63488 00:15:39.031 } 00:15:39.031 ] 00:15:39.031 }' 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.031 13:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.596 "name": 
"raid_bdev1", 00:15:39.596 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:39.596 "strip_size_kb": 64, 00:15:39.596 "state": "online", 00:15:39.596 "raid_level": "raid5f", 00:15:39.596 "superblock": true, 00:15:39.596 "num_base_bdevs": 4, 00:15:39.596 "num_base_bdevs_discovered": 4, 00:15:39.596 "num_base_bdevs_operational": 4, 00:15:39.596 "base_bdevs_list": [ 00:15:39.596 { 00:15:39.596 "name": "spare", 00:15:39.596 "uuid": "5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 00:15:39.596 "is_configured": true, 00:15:39.596 "data_offset": 2048, 00:15:39.596 "data_size": 63488 00:15:39.596 }, 00:15:39.596 { 00:15:39.596 "name": "BaseBdev2", 00:15:39.596 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:39.596 "is_configured": true, 00:15:39.596 "data_offset": 2048, 00:15:39.596 "data_size": 63488 00:15:39.596 }, 00:15:39.596 { 00:15:39.596 "name": "BaseBdev3", 00:15:39.596 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:39.596 "is_configured": true, 00:15:39.596 "data_offset": 2048, 00:15:39.596 "data_size": 63488 00:15:39.596 }, 00:15:39.596 { 00:15:39.596 "name": "BaseBdev4", 00:15:39.596 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:39.596 "is_configured": true, 00:15:39.596 "data_offset": 2048, 00:15:39.596 "data_size": 63488 00:15:39.596 } 00:15:39.596 ] 00:15:39.596 }' 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.596 
13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.596 [2024-11-20 13:29:21.223647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.596 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.597 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.597 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.597 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.597 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:39.597 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.597 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.597 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.597 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.597 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.597 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.854 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.854 "name": "raid_bdev1", 00:15:39.854 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:39.854 "strip_size_kb": 64, 00:15:39.854 "state": "online", 00:15:39.854 "raid_level": "raid5f", 00:15:39.854 "superblock": true, 00:15:39.854 "num_base_bdevs": 4, 00:15:39.854 "num_base_bdevs_discovered": 3, 00:15:39.854 "num_base_bdevs_operational": 3, 00:15:39.854 "base_bdevs_list": [ 00:15:39.854 { 00:15:39.854 "name": null, 00:15:39.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.854 "is_configured": false, 00:15:39.854 "data_offset": 0, 00:15:39.854 "data_size": 63488 00:15:39.854 }, 00:15:39.854 { 00:15:39.854 "name": "BaseBdev2", 00:15:39.854 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:39.854 "is_configured": true, 00:15:39.854 "data_offset": 2048, 00:15:39.854 "data_size": 63488 00:15:39.854 }, 00:15:39.854 { 00:15:39.854 "name": "BaseBdev3", 00:15:39.854 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:39.854 "is_configured": true, 00:15:39.854 "data_offset": 2048, 00:15:39.854 "data_size": 63488 00:15:39.854 }, 00:15:39.854 { 00:15:39.854 "name": "BaseBdev4", 00:15:39.854 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:39.854 "is_configured": true, 00:15:39.854 "data_offset": 
2048, 00:15:39.854 "data_size": 63488 00:15:39.854 } 00:15:39.854 ] 00:15:39.854 }' 00:15:39.854 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.854 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.113 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:40.113 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.113 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.113 [2024-11-20 13:29:21.671050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.113 [2024-11-20 13:29:21.671349] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:40.113 [2024-11-20 13:29:21.671446] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:40.113 [2024-11-20 13:29:21.671554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.113 [2024-11-20 13:29:21.675834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000458f0 00:15:40.113 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.113 13:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:40.113 [2024-11-20 13:29:21.678498] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:41.047 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.047 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.047 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.047 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.047 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.047 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.047 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.047 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.047 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.304 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.304 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.304 "name": "raid_bdev1", 00:15:41.304 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:41.304 "strip_size_kb": 64, 00:15:41.304 "state": "online", 00:15:41.304 
"raid_level": "raid5f", 00:15:41.304 "superblock": true, 00:15:41.305 "num_base_bdevs": 4, 00:15:41.305 "num_base_bdevs_discovered": 4, 00:15:41.305 "num_base_bdevs_operational": 4, 00:15:41.305 "process": { 00:15:41.305 "type": "rebuild", 00:15:41.305 "target": "spare", 00:15:41.305 "progress": { 00:15:41.305 "blocks": 19200, 00:15:41.305 "percent": 10 00:15:41.305 } 00:15:41.305 }, 00:15:41.305 "base_bdevs_list": [ 00:15:41.305 { 00:15:41.305 "name": "spare", 00:15:41.305 "uuid": "5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 00:15:41.305 "is_configured": true, 00:15:41.305 "data_offset": 2048, 00:15:41.305 "data_size": 63488 00:15:41.305 }, 00:15:41.305 { 00:15:41.305 "name": "BaseBdev2", 00:15:41.305 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:41.305 "is_configured": true, 00:15:41.305 "data_offset": 2048, 00:15:41.305 "data_size": 63488 00:15:41.305 }, 00:15:41.305 { 00:15:41.305 "name": "BaseBdev3", 00:15:41.305 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:41.305 "is_configured": true, 00:15:41.305 "data_offset": 2048, 00:15:41.305 "data_size": 63488 00:15:41.305 }, 00:15:41.305 { 00:15:41.305 "name": "BaseBdev4", 00:15:41.305 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:41.305 "is_configured": true, 00:15:41.305 "data_offset": 2048, 00:15:41.305 "data_size": 63488 00:15:41.305 } 00:15:41.305 ] 00:15:41.305 }' 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.305 [2024-11-20 13:29:22.846372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.305 [2024-11-20 13:29:22.887267] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:41.305 [2024-11-20 13:29:22.887354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.305 [2024-11-20 13:29:22.887396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.305 [2024-11-20 13:29:22.887405] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.305 "name": "raid_bdev1", 00:15:41.305 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:41.305 "strip_size_kb": 64, 00:15:41.305 "state": "online", 00:15:41.305 "raid_level": "raid5f", 00:15:41.305 "superblock": true, 00:15:41.305 "num_base_bdevs": 4, 00:15:41.305 "num_base_bdevs_discovered": 3, 00:15:41.305 "num_base_bdevs_operational": 3, 00:15:41.305 "base_bdevs_list": [ 00:15:41.305 { 00:15:41.305 "name": null, 00:15:41.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.305 "is_configured": false, 00:15:41.305 "data_offset": 0, 00:15:41.305 "data_size": 63488 00:15:41.305 }, 00:15:41.305 { 00:15:41.305 "name": "BaseBdev2", 00:15:41.305 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:41.305 "is_configured": true, 00:15:41.305 "data_offset": 2048, 00:15:41.305 "data_size": 63488 00:15:41.305 }, 00:15:41.305 { 00:15:41.305 "name": "BaseBdev3", 00:15:41.305 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:41.305 "is_configured": true, 00:15:41.305 "data_offset": 2048, 00:15:41.305 "data_size": 63488 00:15:41.305 }, 00:15:41.305 { 00:15:41.305 "name": "BaseBdev4", 00:15:41.305 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:41.305 "is_configured": true, 00:15:41.305 "data_offset": 2048, 00:15:41.305 "data_size": 63488 00:15:41.305 } 00:15:41.305 ] 00:15:41.305 
}' 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.305 13:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.871 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:41.871 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.871 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.871 [2024-11-20 13:29:23.336371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:41.871 [2024-11-20 13:29:23.336506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.871 [2024-11-20 13:29:23.336560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:41.871 [2024-11-20 13:29:23.336596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.871 [2024-11-20 13:29:23.337134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.871 [2024-11-20 13:29:23.337205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:41.871 [2024-11-20 13:29:23.337346] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:41.871 [2024-11-20 13:29:23.337393] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:41.871 [2024-11-20 13:29:23.337452] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:41.871 [2024-11-20 13:29:23.337510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.871 [2024-11-20 13:29:23.341728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000459c0 00:15:41.871 spare 00:15:41.871 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.871 13:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:41.871 [2024-11-20 13:29:23.344393] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.815 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.815 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.815 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.815 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.815 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.815 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.815 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.815 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.815 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.815 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.815 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.815 "name": "raid_bdev1", 00:15:42.815 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:42.815 "strip_size_kb": 64, 00:15:42.815 "state": 
"online", 00:15:42.815 "raid_level": "raid5f", 00:15:42.815 "superblock": true, 00:15:42.815 "num_base_bdevs": 4, 00:15:42.815 "num_base_bdevs_discovered": 4, 00:15:42.815 "num_base_bdevs_operational": 4, 00:15:42.815 "process": { 00:15:42.815 "type": "rebuild", 00:15:42.815 "target": "spare", 00:15:42.815 "progress": { 00:15:42.815 "blocks": 19200, 00:15:42.815 "percent": 10 00:15:42.815 } 00:15:42.815 }, 00:15:42.815 "base_bdevs_list": [ 00:15:42.815 { 00:15:42.815 "name": "spare", 00:15:42.815 "uuid": "5f260c75-9bf2-5348-93f5-5c0bf2a1866e", 00:15:42.815 "is_configured": true, 00:15:42.815 "data_offset": 2048, 00:15:42.815 "data_size": 63488 00:15:42.815 }, 00:15:42.815 { 00:15:42.815 "name": "BaseBdev2", 00:15:42.816 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:42.816 "is_configured": true, 00:15:42.816 "data_offset": 2048, 00:15:42.816 "data_size": 63488 00:15:42.816 }, 00:15:42.816 { 00:15:42.816 "name": "BaseBdev3", 00:15:42.816 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:42.816 "is_configured": true, 00:15:42.816 "data_offset": 2048, 00:15:42.816 "data_size": 63488 00:15:42.816 }, 00:15:42.816 { 00:15:42.816 "name": "BaseBdev4", 00:15:42.816 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:42.816 "is_configured": true, 00:15:42.816 "data_offset": 2048, 00:15:42.816 "data_size": 63488 00:15:42.816 } 00:15:42.816 ] 00:15:42.816 }' 00:15:42.816 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.816 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.816 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:43.075 13:29:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.075 [2024-11-20 13:29:24.520680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.075 [2024-11-20 13:29:24.553618] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:43.075 [2024-11-20 13:29:24.553712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.075 [2024-11-20 13:29:24.553732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.075 [2024-11-20 13:29:24.553743] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.075 13:29:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.075 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.075 "name": "raid_bdev1", 00:15:43.076 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:43.076 "strip_size_kb": 64, 00:15:43.076 "state": "online", 00:15:43.076 "raid_level": "raid5f", 00:15:43.076 "superblock": true, 00:15:43.076 "num_base_bdevs": 4, 00:15:43.076 "num_base_bdevs_discovered": 3, 00:15:43.076 "num_base_bdevs_operational": 3, 00:15:43.076 "base_bdevs_list": [ 00:15:43.076 { 00:15:43.076 "name": null, 00:15:43.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.076 "is_configured": false, 00:15:43.076 "data_offset": 0, 00:15:43.076 "data_size": 63488 00:15:43.076 }, 00:15:43.076 { 00:15:43.076 "name": "BaseBdev2", 00:15:43.076 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:43.076 "is_configured": true, 00:15:43.076 "data_offset": 2048, 00:15:43.076 "data_size": 63488 00:15:43.076 }, 00:15:43.076 { 00:15:43.076 "name": "BaseBdev3", 00:15:43.076 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:43.076 "is_configured": true, 00:15:43.076 "data_offset": 2048, 00:15:43.076 "data_size": 63488 00:15:43.076 }, 00:15:43.076 { 00:15:43.076 "name": "BaseBdev4", 00:15:43.076 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:43.076 "is_configured": true, 00:15:43.076 "data_offset": 2048, 00:15:43.076 
"data_size": 63488 00:15:43.076 } 00:15:43.076 ] 00:15:43.076 }' 00:15:43.076 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.076 13:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.642 "name": "raid_bdev1", 00:15:43.642 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:43.642 "strip_size_kb": 64, 00:15:43.642 "state": "online", 00:15:43.642 "raid_level": "raid5f", 00:15:43.642 "superblock": true, 00:15:43.642 "num_base_bdevs": 4, 00:15:43.642 "num_base_bdevs_discovered": 3, 00:15:43.642 "num_base_bdevs_operational": 3, 00:15:43.642 "base_bdevs_list": [ 00:15:43.642 { 00:15:43.642 "name": null, 00:15:43.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.642 
"is_configured": false, 00:15:43.642 "data_offset": 0, 00:15:43.642 "data_size": 63488 00:15:43.642 }, 00:15:43.642 { 00:15:43.642 "name": "BaseBdev2", 00:15:43.642 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:43.642 "is_configured": true, 00:15:43.642 "data_offset": 2048, 00:15:43.642 "data_size": 63488 00:15:43.642 }, 00:15:43.642 { 00:15:43.642 "name": "BaseBdev3", 00:15:43.642 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:43.642 "is_configured": true, 00:15:43.642 "data_offset": 2048, 00:15:43.642 "data_size": 63488 00:15:43.642 }, 00:15:43.642 { 00:15:43.642 "name": "BaseBdev4", 00:15:43.642 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:43.642 "is_configured": true, 00:15:43.642 "data_offset": 2048, 00:15:43.642 "data_size": 63488 00:15:43.642 } 00:15:43.642 ] 00:15:43.642 }' 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.642 13:29:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.642 [2024-11-20 13:29:25.206392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:43.642 [2024-11-20 13:29:25.206529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.642 [2024-11-20 13:29:25.206572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:43.642 [2024-11-20 13:29:25.206629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.642 [2024-11-20 13:29:25.207117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.642 [2024-11-20 13:29:25.207188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:43.642 [2024-11-20 13:29:25.207304] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:43.642 [2024-11-20 13:29:25.207355] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:43.642 [2024-11-20 13:29:25.207403] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:43.642 [2024-11-20 13:29:25.207467] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:43.642 BaseBdev1 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.642 13:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:44.577 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:44.577 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.577 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:44.577 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.577 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.577 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.577 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.577 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.577 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.577 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.577 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.577 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.577 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.577 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.834 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.834 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.834 "name": "raid_bdev1", 00:15:44.834 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:44.834 "strip_size_kb": 64, 00:15:44.834 "state": "online", 00:15:44.834 "raid_level": "raid5f", 00:15:44.834 "superblock": true, 00:15:44.834 "num_base_bdevs": 4, 00:15:44.834 "num_base_bdevs_discovered": 3, 00:15:44.834 "num_base_bdevs_operational": 3, 00:15:44.834 "base_bdevs_list": [ 00:15:44.834 { 00:15:44.834 "name": null, 00:15:44.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.834 "is_configured": false, 00:15:44.834 
"data_offset": 0, 00:15:44.834 "data_size": 63488 00:15:44.834 }, 00:15:44.834 { 00:15:44.834 "name": "BaseBdev2", 00:15:44.834 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:44.834 "is_configured": true, 00:15:44.834 "data_offset": 2048, 00:15:44.834 "data_size": 63488 00:15:44.834 }, 00:15:44.834 { 00:15:44.834 "name": "BaseBdev3", 00:15:44.834 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:44.834 "is_configured": true, 00:15:44.834 "data_offset": 2048, 00:15:44.834 "data_size": 63488 00:15:44.834 }, 00:15:44.834 { 00:15:44.834 "name": "BaseBdev4", 00:15:44.834 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:44.834 "is_configured": true, 00:15:44.834 "data_offset": 2048, 00:15:44.834 "data_size": 63488 00:15:44.834 } 00:15:44.834 ] 00:15:44.834 }' 00:15:44.834 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.834 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.094 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:45.094 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.094 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:45.094 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:45.094 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.094 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.094 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.094 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.094 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
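The `raid_bdev_examine_sb` lines above show why `BaseBdev1` is rejected on re-add: its superblock sequence number (1) is older than the raid bdev's (5), so the examine path fails with Invalid argument. The harness's `NOT` wrapper treats that non-zero exit as the passing outcome. A minimal sketch of that expected-failure pattern, with a hypothetical `fake_add_base_bdev` standing in for the real `rpc_cmd bdev_raid_add_base_bdev` call:

```shell
# Hypothetical stand-in for rpc_cmd bdev_raid_add_base_bdev: always fails,
# mirroring the JSON-RPC code -22 (Invalid argument) response in the log.
fake_add_base_bdev() {
  echo 'Failed to add base bdev to RAID bdev: Invalid argument' >&2
  return 1
}

# Expected-failure check: a zero exit status here would be the bug.
if fake_add_base_bdev raid_bdev1 BaseBdev1; then
  es=0
else
  es=1
fi
echo "es=$es"
```

This inverts the usual convention: the test asserts `es=1`, so the RPC succeeding would fail the test, matching the `NOT`/`es > 128` bookkeeping visible in the trace.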
00:15:45.094 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.094 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.094 "name": "raid_bdev1", 00:15:45.094 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:45.094 "strip_size_kb": 64, 00:15:45.094 "state": "online", 00:15:45.094 "raid_level": "raid5f", 00:15:45.094 "superblock": true, 00:15:45.094 "num_base_bdevs": 4, 00:15:45.094 "num_base_bdevs_discovered": 3, 00:15:45.094 "num_base_bdevs_operational": 3, 00:15:45.094 "base_bdevs_list": [ 00:15:45.094 { 00:15:45.094 "name": null, 00:15:45.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.094 "is_configured": false, 00:15:45.094 "data_offset": 0, 00:15:45.094 "data_size": 63488 00:15:45.094 }, 00:15:45.094 { 00:15:45.094 "name": "BaseBdev2", 00:15:45.094 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:45.094 "is_configured": true, 00:15:45.094 "data_offset": 2048, 00:15:45.094 "data_size": 63488 00:15:45.094 }, 00:15:45.094 { 00:15:45.094 "name": "BaseBdev3", 00:15:45.094 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:45.094 "is_configured": true, 00:15:45.094 "data_offset": 2048, 00:15:45.094 "data_size": 63488 00:15:45.094 }, 00:15:45.094 { 00:15:45.094 "name": "BaseBdev4", 00:15:45.094 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:45.094 "is_configured": true, 00:15:45.094 "data_offset": 2048, 00:15:45.094 "data_size": 63488 00:15:45.094 } 00:15:45.094 ] 00:15:45.094 }' 00:15:45.094 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:45.354 
13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.354 [2024-11-20 13:29:26.827674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.354 [2024-11-20 13:29:26.827849] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:45.354 [2024-11-20 13:29:26.827866] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:45.354 request: 00:15:45.354 { 00:15:45.354 "base_bdev": "BaseBdev1", 00:15:45.354 "raid_bdev": "raid_bdev1", 00:15:45.354 "method": "bdev_raid_add_base_bdev", 00:15:45.354 "req_id": 1 00:15:45.354 } 00:15:45.354 Got JSON-RPC error response 00:15:45.354 response: 00:15:45.354 { 00:15:45.354 "code": -22, 00:15:45.354 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:15:45.354 } 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:45.354 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:45.355 13:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:46.290 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:46.290 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.290 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.290 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.290 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.290 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.290 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.290 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.290 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.290 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.291 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.291 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.291 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.291 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.291 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.291 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.291 "name": "raid_bdev1", 00:15:46.291 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:46.291 "strip_size_kb": 64, 00:15:46.291 "state": "online", 00:15:46.291 "raid_level": "raid5f", 00:15:46.291 "superblock": true, 00:15:46.291 "num_base_bdevs": 4, 00:15:46.291 "num_base_bdevs_discovered": 3, 00:15:46.291 "num_base_bdevs_operational": 3, 00:15:46.291 "base_bdevs_list": [ 00:15:46.291 { 00:15:46.291 "name": null, 00:15:46.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.291 "is_configured": false, 00:15:46.291 "data_offset": 0, 00:15:46.291 "data_size": 63488 00:15:46.291 }, 00:15:46.291 { 00:15:46.291 "name": "BaseBdev2", 00:15:46.291 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:46.291 "is_configured": true, 00:15:46.291 "data_offset": 2048, 00:15:46.291 "data_size": 63488 00:15:46.291 }, 00:15:46.291 { 00:15:46.291 "name": "BaseBdev3", 00:15:46.291 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:46.291 "is_configured": true, 00:15:46.291 "data_offset": 2048, 00:15:46.291 "data_size": 63488 00:15:46.291 }, 00:15:46.291 { 00:15:46.291 "name": "BaseBdev4", 00:15:46.291 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:46.291 "is_configured": true, 00:15:46.291 "data_offset": 2048, 00:15:46.291 "data_size": 63488 00:15:46.291 } 00:15:46.291 ] 00:15:46.291 }' 00:15:46.291 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.291 13:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.860 "name": "raid_bdev1", 00:15:46.860 "uuid": "10513a08-9d53-4fa9-901b-7259c841671f", 00:15:46.860 "strip_size_kb": 64, 00:15:46.860 "state": "online", 00:15:46.860 "raid_level": "raid5f", 00:15:46.860 "superblock": true, 00:15:46.860 "num_base_bdevs": 4, 00:15:46.860 "num_base_bdevs_discovered": 3, 00:15:46.860 "num_base_bdevs_operational": 3, 00:15:46.860 "base_bdevs_list": [ 00:15:46.860 { 00:15:46.860 "name": null, 00:15:46.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.860 "is_configured": false, 00:15:46.860 "data_offset": 0, 00:15:46.860 "data_size": 63488 00:15:46.860 }, 00:15:46.860 { 00:15:46.860 "name": "BaseBdev2", 00:15:46.860 "uuid": "a65751e9-e637-55cf-bba5-491dfe16619a", 00:15:46.860 "is_configured": true, 
00:15:46.860 "data_offset": 2048, 00:15:46.860 "data_size": 63488 00:15:46.860 }, 00:15:46.860 { 00:15:46.860 "name": "BaseBdev3", 00:15:46.860 "uuid": "e101d76f-1f3c-5db1-b10a-eb3eac81ddd5", 00:15:46.860 "is_configured": true, 00:15:46.860 "data_offset": 2048, 00:15:46.860 "data_size": 63488 00:15:46.860 }, 00:15:46.860 { 00:15:46.860 "name": "BaseBdev4", 00:15:46.860 "uuid": "d5acee66-8e3c-5bd9-8e22-efe38130de15", 00:15:46.860 "is_configured": true, 00:15:46.860 "data_offset": 2048, 00:15:46.860 "data_size": 63488 00:15:46.860 } 00:15:46.860 ] 00:15:46.860 }' 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95288 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 95288 ']' 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 95288 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95288 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 95288' 00:15:46.860 killing process with pid 95288 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 95288 00:15:46.860 Received shutdown signal, test time was about 60.000000 seconds 00:15:46.860 00:15:46.860 Latency(us) 00:15:46.860 [2024-11-20T13:29:28.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.860 [2024-11-20T13:29:28.528Z] =================================================================================================================== 00:15:46.860 [2024-11-20T13:29:28.528Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:46.860 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 95288 00:15:46.860 [2024-11-20 13:29:28.463044] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.860 [2024-11-20 13:29:28.463205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.860 [2024-11-20 13:29:28.463317] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.860 [2024-11-20 13:29:28.463362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:15:46.860 [2024-11-20 13:29:28.514645] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.119 ************************************ 00:15:47.119 END TEST raid5f_rebuild_test_sb 00:15:47.119 ************************************ 00:15:47.119 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:47.119 00:15:47.119 real 0m25.562s 00:15:47.119 user 0m32.675s 00:15:47.119 sys 0m3.165s 00:15:47.119 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.119 13:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.119 13:29:28 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:15:47.119 13:29:28 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:15:47.119 13:29:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:47.119 13:29:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.119 13:29:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:47.119 ************************************ 00:15:47.119 START TEST raid_state_function_test_sb_4k 00:15:47.119 ************************************ 00:15:47.119 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:15:47.119 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:47.119 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:47.119 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:47.119 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:47.119 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:47.119 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:47.378 13:29:28 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:47.378 Process raid pid: 96082 00:15:47.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96082 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96082' 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96082 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 96082 ']' 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:47.378 13:29:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.378 [2024-11-20 13:29:28.873840] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:15:47.378 [2024-11-20 13:29:28.874477] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.378 [2024-11-20 13:29:29.031284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.636 [2024-11-20 13:29:29.062577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.636 [2024-11-20 13:29:29.107717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.636 [2024-11-20 13:29:29.107765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 
00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.262 [2024-11-20 13:29:29.778468] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:48.262 [2024-11-20 13:29:29.778529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:48.262 [2024-11-20 13:29:29.778547] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:48.262 [2024-11-20 13:29:29.778558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.262 13:29:29 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.262 "name": "Existed_Raid", 00:15:48.262 "uuid": "11e1f36f-6caf-40ac-aeca-6fcded6db86f", 00:15:48.262 "strip_size_kb": 0, 00:15:48.262 "state": "configuring", 00:15:48.262 "raid_level": "raid1", 00:15:48.262 "superblock": true, 00:15:48.262 "num_base_bdevs": 2, 00:15:48.262 "num_base_bdevs_discovered": 0, 00:15:48.262 "num_base_bdevs_operational": 2, 00:15:48.262 "base_bdevs_list": [ 00:15:48.262 { 00:15:48.262 "name": "BaseBdev1", 00:15:48.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.262 "is_configured": false, 00:15:48.262 "data_offset": 0, 00:15:48.262 "data_size": 0 00:15:48.262 }, 00:15:48.262 { 00:15:48.262 "name": "BaseBdev2", 00:15:48.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.262 "is_configured": false, 00:15:48.262 "data_offset": 0, 00:15:48.262 "data_size": 0 00:15:48.262 } 00:15:48.262 ] 00:15:48.262 }' 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.262 13:29:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.832 [2024-11-20 13:29:30.213641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:48.832 [2024-11-20 13:29:30.213740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.832 [2024-11-20 13:29:30.221605] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:48.832 [2024-11-20 13:29:30.221647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:48.832 [2024-11-20 13:29:30.221656] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:48.832 [2024-11-20 13:29:30.221691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:48.832 [2024-11-20 13:29:30.238515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.832 BaseBdev1 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.832 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.832 [ 00:15:48.832 { 00:15:48.832 "name": "BaseBdev1", 00:15:48.832 "aliases": [ 00:15:48.832 "65b77968-23df-4973-b90a-eb3dc50cbfba" 00:15:48.832 
], 00:15:48.832 "product_name": "Malloc disk", 00:15:48.832 "block_size": 4096, 00:15:48.832 "num_blocks": 8192, 00:15:48.832 "uuid": "65b77968-23df-4973-b90a-eb3dc50cbfba", 00:15:48.832 "assigned_rate_limits": { 00:15:48.832 "rw_ios_per_sec": 0, 00:15:48.832 "rw_mbytes_per_sec": 0, 00:15:48.832 "r_mbytes_per_sec": 0, 00:15:48.833 "w_mbytes_per_sec": 0 00:15:48.833 }, 00:15:48.833 "claimed": true, 00:15:48.833 "claim_type": "exclusive_write", 00:15:48.833 "zoned": false, 00:15:48.833 "supported_io_types": { 00:15:48.833 "read": true, 00:15:48.833 "write": true, 00:15:48.833 "unmap": true, 00:15:48.833 "flush": true, 00:15:48.833 "reset": true, 00:15:48.833 "nvme_admin": false, 00:15:48.833 "nvme_io": false, 00:15:48.833 "nvme_io_md": false, 00:15:48.833 "write_zeroes": true, 00:15:48.833 "zcopy": true, 00:15:48.833 "get_zone_info": false, 00:15:48.833 "zone_management": false, 00:15:48.833 "zone_append": false, 00:15:48.833 "compare": false, 00:15:48.833 "compare_and_write": false, 00:15:48.833 "abort": true, 00:15:48.833 "seek_hole": false, 00:15:48.833 "seek_data": false, 00:15:48.833 "copy": true, 00:15:48.833 "nvme_iov_md": false 00:15:48.833 }, 00:15:48.833 "memory_domains": [ 00:15:48.833 { 00:15:48.833 "dma_device_id": "system", 00:15:48.833 "dma_device_type": 1 00:15:48.833 }, 00:15:48.833 { 00:15:48.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.833 "dma_device_type": 2 00:15:48.833 } 00:15:48.833 ], 00:15:48.833 "driver_specific": {} 00:15:48.833 } 00:15:48.833 ] 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.833 "name": "Existed_Raid", 00:15:48.833 "uuid": "05b23439-863a-47e4-ab2c-1d931bcf2c58", 00:15:48.833 "strip_size_kb": 0, 00:15:48.833 "state": "configuring", 00:15:48.833 "raid_level": "raid1", 00:15:48.833 "superblock": true, 00:15:48.833 "num_base_bdevs": 2, 00:15:48.833 "num_base_bdevs_discovered": 1, 
00:15:48.833 "num_base_bdevs_operational": 2, 00:15:48.833 "base_bdevs_list": [ 00:15:48.833 { 00:15:48.833 "name": "BaseBdev1", 00:15:48.833 "uuid": "65b77968-23df-4973-b90a-eb3dc50cbfba", 00:15:48.833 "is_configured": true, 00:15:48.833 "data_offset": 256, 00:15:48.833 "data_size": 7936 00:15:48.833 }, 00:15:48.833 { 00:15:48.833 "name": "BaseBdev2", 00:15:48.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.833 "is_configured": false, 00:15:48.833 "data_offset": 0, 00:15:48.833 "data_size": 0 00:15:48.833 } 00:15:48.833 ] 00:15:48.833 }' 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.833 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.092 [2024-11-20 13:29:30.645909] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:49.092 [2024-11-20 13:29:30.646051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.092 [2024-11-20 13:29:30.657936] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.092 [2024-11-20 13:29:30.660236] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.092 [2024-11-20 13:29:30.660326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.092 "name": "Existed_Raid", 00:15:49.092 "uuid": "8f215510-3fe2-4d66-843b-2abffe6cec19", 00:15:49.092 "strip_size_kb": 0, 00:15:49.092 "state": "configuring", 00:15:49.092 "raid_level": "raid1", 00:15:49.092 "superblock": true, 00:15:49.092 "num_base_bdevs": 2, 00:15:49.092 "num_base_bdevs_discovered": 1, 00:15:49.092 "num_base_bdevs_operational": 2, 00:15:49.092 "base_bdevs_list": [ 00:15:49.092 { 00:15:49.092 "name": "BaseBdev1", 00:15:49.092 "uuid": "65b77968-23df-4973-b90a-eb3dc50cbfba", 00:15:49.092 "is_configured": true, 00:15:49.092 "data_offset": 256, 00:15:49.092 "data_size": 7936 00:15:49.092 }, 00:15:49.092 { 00:15:49.092 "name": "BaseBdev2", 00:15:49.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.092 "is_configured": false, 00:15:49.092 "data_offset": 0, 00:15:49.092 "data_size": 0 00:15:49.092 } 00:15:49.092 ] 00:15:49.092 }' 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.092 13:29:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.662 13:29:31 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.662 [2024-11-20 13:29:31.156414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.662 [2024-11-20 13:29:31.156722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:49.662 [2024-11-20 13:29:31.156743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:49.662 [2024-11-20 13:29:31.157063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:49.662 BaseBdev2 00:15:49.662 [2024-11-20 13:29:31.157233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:49.662 [2024-11-20 13:29:31.157282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:49.662 [2024-11-20 13:29:31.157406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.662 13:29:31 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.662 [ 00:15:49.662 { 00:15:49.662 "name": "BaseBdev2", 00:15:49.662 "aliases": [ 00:15:49.662 "a71d5b73-93c6-451f-95ab-b9ce4640c0d1" 00:15:49.662 ], 00:15:49.662 "product_name": "Malloc disk", 00:15:49.662 "block_size": 4096, 00:15:49.662 "num_blocks": 8192, 00:15:49.662 "uuid": "a71d5b73-93c6-451f-95ab-b9ce4640c0d1", 00:15:49.662 "assigned_rate_limits": { 00:15:49.662 "rw_ios_per_sec": 0, 00:15:49.662 "rw_mbytes_per_sec": 0, 00:15:49.662 "r_mbytes_per_sec": 0, 00:15:49.662 "w_mbytes_per_sec": 0 00:15:49.662 }, 00:15:49.662 "claimed": true, 00:15:49.662 "claim_type": "exclusive_write", 00:15:49.662 "zoned": false, 00:15:49.662 "supported_io_types": { 00:15:49.662 "read": true, 00:15:49.662 "write": true, 00:15:49.662 "unmap": true, 00:15:49.662 "flush": true, 00:15:49.662 "reset": true, 00:15:49.662 "nvme_admin": false, 00:15:49.662 "nvme_io": false, 00:15:49.662 "nvme_io_md": false, 00:15:49.662 "write_zeroes": true, 00:15:49.662 "zcopy": true, 00:15:49.662 "get_zone_info": false, 00:15:49.662 "zone_management": false, 00:15:49.662 "zone_append": false, 00:15:49.662 "compare": false, 00:15:49.662 "compare_and_write": false, 00:15:49.662 "abort": true, 00:15:49.662 "seek_hole": false, 00:15:49.662 "seek_data": false, 00:15:49.662 "copy": true, 00:15:49.662 "nvme_iov_md": false 
00:15:49.662 }, 00:15:49.662 "memory_domains": [ 00:15:49.662 { 00:15:49.662 "dma_device_id": "system", 00:15:49.662 "dma_device_type": 1 00:15:49.662 }, 00:15:49.662 { 00:15:49.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.662 "dma_device_type": 2 00:15:49.662 } 00:15:49.662 ], 00:15:49.662 "driver_specific": {} 00:15:49.662 } 00:15:49.662 ] 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.662 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.662 "name": "Existed_Raid", 00:15:49.662 "uuid": "8f215510-3fe2-4d66-843b-2abffe6cec19", 00:15:49.662 "strip_size_kb": 0, 00:15:49.662 "state": "online", 00:15:49.662 "raid_level": "raid1", 00:15:49.662 "superblock": true, 00:15:49.662 "num_base_bdevs": 2, 00:15:49.662 "num_base_bdevs_discovered": 2, 00:15:49.662 "num_base_bdevs_operational": 2, 00:15:49.662 "base_bdevs_list": [ 00:15:49.662 { 00:15:49.662 "name": "BaseBdev1", 00:15:49.662 "uuid": "65b77968-23df-4973-b90a-eb3dc50cbfba", 00:15:49.662 "is_configured": true, 00:15:49.662 "data_offset": 256, 00:15:49.662 "data_size": 7936 00:15:49.662 }, 00:15:49.663 { 00:15:49.663 "name": "BaseBdev2", 00:15:49.663 "uuid": "a71d5b73-93c6-451f-95ab-b9ce4640c0d1", 00:15:49.663 "is_configured": true, 00:15:49.663 "data_offset": 256, 00:15:49.663 "data_size": 7936 00:15:49.663 } 00:15:49.663 ] 00:15:49.663 }' 00:15:49.663 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.663 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:50.229 13:29:31 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.229 [2024-11-20 13:29:31.632050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:50.229 "name": "Existed_Raid", 00:15:50.229 "aliases": [ 00:15:50.229 "8f215510-3fe2-4d66-843b-2abffe6cec19" 00:15:50.229 ], 00:15:50.229 "product_name": "Raid Volume", 00:15:50.229 "block_size": 4096, 00:15:50.229 "num_blocks": 7936, 00:15:50.229 "uuid": "8f215510-3fe2-4d66-843b-2abffe6cec19", 00:15:50.229 "assigned_rate_limits": { 00:15:50.229 "rw_ios_per_sec": 0, 00:15:50.229 "rw_mbytes_per_sec": 0, 00:15:50.229 "r_mbytes_per_sec": 0, 00:15:50.229 "w_mbytes_per_sec": 0 00:15:50.229 }, 00:15:50.229 "claimed": false, 00:15:50.229 "zoned": false, 00:15:50.229 "supported_io_types": { 00:15:50.229 "read": true, 
00:15:50.229 "write": true, 00:15:50.229 "unmap": false, 00:15:50.229 "flush": false, 00:15:50.229 "reset": true, 00:15:50.229 "nvme_admin": false, 00:15:50.229 "nvme_io": false, 00:15:50.229 "nvme_io_md": false, 00:15:50.229 "write_zeroes": true, 00:15:50.229 "zcopy": false, 00:15:50.229 "get_zone_info": false, 00:15:50.229 "zone_management": false, 00:15:50.229 "zone_append": false, 00:15:50.229 "compare": false, 00:15:50.229 "compare_and_write": false, 00:15:50.229 "abort": false, 00:15:50.229 "seek_hole": false, 00:15:50.229 "seek_data": false, 00:15:50.229 "copy": false, 00:15:50.229 "nvme_iov_md": false 00:15:50.229 }, 00:15:50.229 "memory_domains": [ 00:15:50.229 { 00:15:50.229 "dma_device_id": "system", 00:15:50.229 "dma_device_type": 1 00:15:50.229 }, 00:15:50.229 { 00:15:50.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.229 "dma_device_type": 2 00:15:50.229 }, 00:15:50.229 { 00:15:50.229 "dma_device_id": "system", 00:15:50.229 "dma_device_type": 1 00:15:50.229 }, 00:15:50.229 { 00:15:50.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.229 "dma_device_type": 2 00:15:50.229 } 00:15:50.229 ], 00:15:50.229 "driver_specific": { 00:15:50.229 "raid": { 00:15:50.229 "uuid": "8f215510-3fe2-4d66-843b-2abffe6cec19", 00:15:50.229 "strip_size_kb": 0, 00:15:50.229 "state": "online", 00:15:50.229 "raid_level": "raid1", 00:15:50.229 "superblock": true, 00:15:50.229 "num_base_bdevs": 2, 00:15:50.229 "num_base_bdevs_discovered": 2, 00:15:50.229 "num_base_bdevs_operational": 2, 00:15:50.229 "base_bdevs_list": [ 00:15:50.229 { 00:15:50.229 "name": "BaseBdev1", 00:15:50.229 "uuid": "65b77968-23df-4973-b90a-eb3dc50cbfba", 00:15:50.229 "is_configured": true, 00:15:50.229 "data_offset": 256, 00:15:50.229 "data_size": 7936 00:15:50.229 }, 00:15:50.229 { 00:15:50.229 "name": "BaseBdev2", 00:15:50.229 "uuid": "a71d5b73-93c6-451f-95ab-b9ce4640c0d1", 00:15:50.229 "is_configured": true, 00:15:50.229 "data_offset": 256, 00:15:50.229 "data_size": 7936 00:15:50.229 } 
00:15:50.229 ] 00:15:50.229 } 00:15:50.229 } 00:15:50.229 }' 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:50.229 BaseBdev2' 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:50.229 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.230 [2024-11-20 13:29:31.843473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:50.230 13:29:31 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.230 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.488 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.488 "name": "Existed_Raid", 00:15:50.488 "uuid": "8f215510-3fe2-4d66-843b-2abffe6cec19", 00:15:50.488 "strip_size_kb": 0, 00:15:50.488 "state": "online", 00:15:50.488 "raid_level": "raid1", 00:15:50.488 "superblock": true, 00:15:50.488 
"num_base_bdevs": 2, 00:15:50.488 "num_base_bdevs_discovered": 1, 00:15:50.488 "num_base_bdevs_operational": 1, 00:15:50.488 "base_bdevs_list": [ 00:15:50.488 { 00:15:50.488 "name": null, 00:15:50.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.488 "is_configured": false, 00:15:50.488 "data_offset": 0, 00:15:50.488 "data_size": 7936 00:15:50.488 }, 00:15:50.488 { 00:15:50.488 "name": "BaseBdev2", 00:15:50.488 "uuid": "a71d5b73-93c6-451f-95ab-b9ce4640c0d1", 00:15:50.488 "is_configured": true, 00:15:50.488 "data_offset": 256, 00:15:50.488 "data_size": 7936 00:15:50.488 } 00:15:50.488 ] 00:15:50.488 }' 00:15:50.488 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.488 13:29:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.745 [2024-11-20 13:29:32.334608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:50.745 [2024-11-20 13:29:32.334729] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.745 [2024-11-20 13:29:32.346551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.745 [2024-11-20 13:29:32.346601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.745 [2024-11-20 13:29:32.346614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:50.745 13:29:32 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96082 00:15:50.745 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 96082 ']' 00:15:50.746 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 96082 00:15:50.746 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:15:50.746 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.746 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96082 00:15:51.003 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.003 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.003 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96082' 00:15:51.003 killing process with pid 96082 00:15:51.003 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 96082 00:15:51.003 [2024-11-20 13:29:32.442162] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:51.003 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 96082 00:15:51.003 [2024-11-20 13:29:32.443221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:51.003 ************************************ 00:15:51.003 END TEST raid_state_function_test_sb_4k 00:15:51.003 ************************************ 00:15:51.003 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@328 -- # return 0 00:15:51.003 00:15:51.003 real 0m3.878s 00:15:51.003 user 0m6.187s 00:15:51.003 sys 0m0.763s 00:15:51.003 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.003 13:29:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.261 13:29:32 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:15:51.261 13:29:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:51.261 13:29:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.261 13:29:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:51.261 ************************************ 00:15:51.261 START TEST raid_superblock_test_4k 00:15:51.261 ************************************ 00:15:51.261 13:29:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:15:51.261 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:51.261 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:51.261 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:51.261 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:51.261 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:51.261 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:51.261 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:51.261 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:51.261 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:51.261 
13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:51.261 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:51.261 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:51.261 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:51.261 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:51.261 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:51.261 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96323 00:15:51.262 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:51.262 13:29:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 96323 00:15:51.262 13:29:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 96323 ']' 00:15:51.262 13:29:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.262 13:29:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.262 13:29:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.262 13:29:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.262 13:29:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.262 [2024-11-20 13:29:32.819831] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:15:51.262 [2024-11-20 13:29:32.820085] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96323 ] 00:15:51.519 [2024-11-20 13:29:32.974758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.519 [2024-11-20 13:29:33.004314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.520 [2024-11-20 13:29:33.047217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.520 [2024-11-20 13:29:33.047340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.086 malloc1 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.086 [2024-11-20 13:29:33.734468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:52.086 [2024-11-20 13:29:33.734570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.086 [2024-11-20 13:29:33.734603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:52.086 [2024-11-20 13:29:33.734625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.086 [2024-11-20 13:29:33.736962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.086 [2024-11-20 13:29:33.737014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:52.086 pt1 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.086 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.344 malloc2 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.344 [2024-11-20 13:29:33.763117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:52.344 [2024-11-20 13:29:33.763236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.344 [2024-11-20 13:29:33.763272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:52.344 [2024-11-20 13:29:33.763303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.344 [2024-11-20 13:29:33.765601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.344 [2024-11-20 
13:29:33.765681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:52.344 pt2 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.344 [2024-11-20 13:29:33.775164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:52.344 [2024-11-20 13:29:33.777307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:52.344 [2024-11-20 13:29:33.777529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:52.344 [2024-11-20 13:29:33.777584] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:52.344 [2024-11-20 13:29:33.777931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:52.344 [2024-11-20 13:29:33.778152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:52.344 [2024-11-20 13:29:33.778203] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:52.344 [2024-11-20 13:29:33.778413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.344 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.344 "name": "raid_bdev1", 00:15:52.344 "uuid": "e2d577e9-cc61-44d3-a11c-e39e20ee469c", 00:15:52.344 "strip_size_kb": 0, 00:15:52.344 "state": "online", 00:15:52.344 "raid_level": "raid1", 00:15:52.344 "superblock": true, 00:15:52.344 "num_base_bdevs": 2, 00:15:52.344 
"num_base_bdevs_discovered": 2, 00:15:52.344 "num_base_bdevs_operational": 2, 00:15:52.344 "base_bdevs_list": [ 00:15:52.344 { 00:15:52.344 "name": "pt1", 00:15:52.344 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:52.344 "is_configured": true, 00:15:52.345 "data_offset": 256, 00:15:52.345 "data_size": 7936 00:15:52.345 }, 00:15:52.345 { 00:15:52.345 "name": "pt2", 00:15:52.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:52.345 "is_configured": true, 00:15:52.345 "data_offset": 256, 00:15:52.345 "data_size": 7936 00:15:52.345 } 00:15:52.345 ] 00:15:52.345 }' 00:15:52.345 13:29:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.345 13:29:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.602 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:52.602 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:52.602 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:52.602 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:52.602 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:52.602 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:52.602 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:52.602 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:52.602 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.602 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.602 [2024-11-20 13:29:34.266670] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:52.861 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.861 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:52.861 "name": "raid_bdev1", 00:15:52.861 "aliases": [ 00:15:52.861 "e2d577e9-cc61-44d3-a11c-e39e20ee469c" 00:15:52.861 ], 00:15:52.861 "product_name": "Raid Volume", 00:15:52.861 "block_size": 4096, 00:15:52.861 "num_blocks": 7936, 00:15:52.861 "uuid": "e2d577e9-cc61-44d3-a11c-e39e20ee469c", 00:15:52.861 "assigned_rate_limits": { 00:15:52.861 "rw_ios_per_sec": 0, 00:15:52.861 "rw_mbytes_per_sec": 0, 00:15:52.861 "r_mbytes_per_sec": 0, 00:15:52.861 "w_mbytes_per_sec": 0 00:15:52.861 }, 00:15:52.861 "claimed": false, 00:15:52.861 "zoned": false, 00:15:52.861 "supported_io_types": { 00:15:52.861 "read": true, 00:15:52.861 "write": true, 00:15:52.861 "unmap": false, 00:15:52.861 "flush": false, 00:15:52.861 "reset": true, 00:15:52.861 "nvme_admin": false, 00:15:52.861 "nvme_io": false, 00:15:52.861 "nvme_io_md": false, 00:15:52.861 "write_zeroes": true, 00:15:52.861 "zcopy": false, 00:15:52.861 "get_zone_info": false, 00:15:52.861 "zone_management": false, 00:15:52.861 "zone_append": false, 00:15:52.861 "compare": false, 00:15:52.861 "compare_and_write": false, 00:15:52.861 "abort": false, 00:15:52.861 "seek_hole": false, 00:15:52.861 "seek_data": false, 00:15:52.861 "copy": false, 00:15:52.861 "nvme_iov_md": false 00:15:52.861 }, 00:15:52.861 "memory_domains": [ 00:15:52.861 { 00:15:52.861 "dma_device_id": "system", 00:15:52.861 "dma_device_type": 1 00:15:52.861 }, 00:15:52.861 { 00:15:52.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.861 "dma_device_type": 2 00:15:52.861 }, 00:15:52.861 { 00:15:52.861 "dma_device_id": "system", 00:15:52.861 "dma_device_type": 1 00:15:52.861 }, 00:15:52.861 { 00:15:52.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.861 "dma_device_type": 2 00:15:52.861 } 00:15:52.861 ], 
00:15:52.861 "driver_specific": { 00:15:52.861 "raid": { 00:15:52.861 "uuid": "e2d577e9-cc61-44d3-a11c-e39e20ee469c", 00:15:52.861 "strip_size_kb": 0, 00:15:52.861 "state": "online", 00:15:52.861 "raid_level": "raid1", 00:15:52.861 "superblock": true, 00:15:52.861 "num_base_bdevs": 2, 00:15:52.861 "num_base_bdevs_discovered": 2, 00:15:52.861 "num_base_bdevs_operational": 2, 00:15:52.861 "base_bdevs_list": [ 00:15:52.861 { 00:15:52.861 "name": "pt1", 00:15:52.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:52.861 "is_configured": true, 00:15:52.861 "data_offset": 256, 00:15:52.861 "data_size": 7936 00:15:52.861 }, 00:15:52.861 { 00:15:52.861 "name": "pt2", 00:15:52.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:52.861 "is_configured": true, 00:15:52.861 "data_offset": 256, 00:15:52.861 "data_size": 7936 00:15:52.861 } 00:15:52.861 ] 00:15:52.861 } 00:15:52.861 } 00:15:52.861 }' 00:15:52.861 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:52.861 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:52.861 pt2' 00:15:52.861 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.862 13:29:34 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.862 [2024-11-20 13:29:34.506209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.862 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e2d577e9-cc61-44d3-a11c-e39e20ee469c 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z e2d577e9-cc61-44d3-a11c-e39e20ee469c ']' 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.120 [2024-11-20 13:29:34.541839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:53.120 [2024-11-20 13:29:34.541958] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.120 [2024-11-20 13:29:34.542069] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.120 [2024-11-20 13:29:34.542141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.120 [2024-11-20 13:29:34.542151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:53.120 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.121 [2024-11-20 13:29:34.689584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:53.121 [2024-11-20 13:29:34.691856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:53.121 [2024-11-20 13:29:34.691931] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:53.121 [2024-11-20 13:29:34.691986] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:53.121 [2024-11-20 13:29:34.692018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:53.121 [2024-11-20 13:29:34.692029] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:53.121 request: 00:15:53.121 { 00:15:53.121 "name": "raid_bdev1", 00:15:53.121 "raid_level": "raid1", 00:15:53.121 "base_bdevs": [ 00:15:53.121 "malloc1", 00:15:53.121 "malloc2" 00:15:53.121 ], 00:15:53.121 "superblock": false, 00:15:53.121 "method": "bdev_raid_create", 00:15:53.121 "req_id": 1 00:15:53.121 } 00:15:53.121 Got JSON-RPC error response 00:15:53.121 response: 00:15:53.121 { 00:15:53.121 "code": -17, 00:15:53.121 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:53.121 } 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.121 [2024-11-20 13:29:34.741479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:53.121 [2024-11-20 13:29:34.741551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.121 [2024-11-20 13:29:34.741573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:53.121 [2024-11-20 13:29:34.741584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.121 [2024-11-20 13:29:34.744064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.121 [2024-11-20 13:29:34.744104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:53.121 [2024-11-20 13:29:34.744195] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:53.121 [2024-11-20 13:29:34.744240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:53.121 pt1 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.121 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.379 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.379 "name": "raid_bdev1", 00:15:53.379 "uuid": "e2d577e9-cc61-44d3-a11c-e39e20ee469c", 00:15:53.379 "strip_size_kb": 0, 00:15:53.379 "state": "configuring", 00:15:53.380 "raid_level": "raid1", 00:15:53.380 "superblock": true, 00:15:53.380 "num_base_bdevs": 2, 00:15:53.380 "num_base_bdevs_discovered": 1, 00:15:53.380 "num_base_bdevs_operational": 2, 00:15:53.380 "base_bdevs_list": [ 00:15:53.380 { 00:15:53.380 "name": "pt1", 00:15:53.380 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:53.380 "is_configured": true, 00:15:53.380 "data_offset": 256, 00:15:53.380 "data_size": 7936 00:15:53.380 }, 00:15:53.380 { 00:15:53.380 "name": null, 00:15:53.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:53.380 "is_configured": false, 00:15:53.380 "data_offset": 256, 00:15:53.380 "data_size": 7936 00:15:53.380 } 
00:15:53.380 ] 00:15:53.380 }' 00:15:53.380 13:29:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.380 13:29:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.639 [2024-11-20 13:29:35.204726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:53.639 [2024-11-20 13:29:35.204869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.639 [2024-11-20 13:29:35.204924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:53.639 [2024-11-20 13:29:35.204961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.639 [2024-11-20 13:29:35.205472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.639 [2024-11-20 13:29:35.205536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:53.639 [2024-11-20 13:29:35.205662] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:53.639 [2024-11-20 13:29:35.205719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:53.639 [2024-11-20 13:29:35.205867] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000001900 00:15:53.639 [2024-11-20 13:29:35.205907] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:53.639 [2024-11-20 13:29:35.206210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:53.639 [2024-11-20 13:29:35.206385] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:53.639 [2024-11-20 13:29:35.206437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:53.639 [2024-11-20 13:29:35.206597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.639 pt2 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.639 "name": "raid_bdev1", 00:15:53.639 "uuid": "e2d577e9-cc61-44d3-a11c-e39e20ee469c", 00:15:53.639 "strip_size_kb": 0, 00:15:53.639 "state": "online", 00:15:53.639 "raid_level": "raid1", 00:15:53.639 "superblock": true, 00:15:53.639 "num_base_bdevs": 2, 00:15:53.639 "num_base_bdevs_discovered": 2, 00:15:53.639 "num_base_bdevs_operational": 2, 00:15:53.639 "base_bdevs_list": [ 00:15:53.639 { 00:15:53.639 "name": "pt1", 00:15:53.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:53.639 "is_configured": true, 00:15:53.639 "data_offset": 256, 00:15:53.639 "data_size": 7936 00:15:53.639 }, 00:15:53.639 { 00:15:53.639 "name": "pt2", 00:15:53.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:53.639 "is_configured": true, 00:15:53.639 "data_offset": 256, 00:15:53.639 "data_size": 7936 00:15:53.639 } 00:15:53.639 ] 00:15:53.639 }' 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.639 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.207 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.208 [2024-11-20 13:29:35.684192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:54.208 "name": "raid_bdev1", 00:15:54.208 "aliases": [ 00:15:54.208 "e2d577e9-cc61-44d3-a11c-e39e20ee469c" 00:15:54.208 ], 00:15:54.208 "product_name": "Raid Volume", 00:15:54.208 "block_size": 4096, 00:15:54.208 "num_blocks": 7936, 00:15:54.208 "uuid": "e2d577e9-cc61-44d3-a11c-e39e20ee469c", 00:15:54.208 "assigned_rate_limits": { 00:15:54.208 "rw_ios_per_sec": 0, 00:15:54.208 "rw_mbytes_per_sec": 0, 00:15:54.208 "r_mbytes_per_sec": 0, 00:15:54.208 "w_mbytes_per_sec": 0 00:15:54.208 }, 00:15:54.208 "claimed": false, 00:15:54.208 "zoned": false, 00:15:54.208 "supported_io_types": { 00:15:54.208 "read": true, 00:15:54.208 "write": true, 00:15:54.208 "unmap": false, 
00:15:54.208 "flush": false, 00:15:54.208 "reset": true, 00:15:54.208 "nvme_admin": false, 00:15:54.208 "nvme_io": false, 00:15:54.208 "nvme_io_md": false, 00:15:54.208 "write_zeroes": true, 00:15:54.208 "zcopy": false, 00:15:54.208 "get_zone_info": false, 00:15:54.208 "zone_management": false, 00:15:54.208 "zone_append": false, 00:15:54.208 "compare": false, 00:15:54.208 "compare_and_write": false, 00:15:54.208 "abort": false, 00:15:54.208 "seek_hole": false, 00:15:54.208 "seek_data": false, 00:15:54.208 "copy": false, 00:15:54.208 "nvme_iov_md": false 00:15:54.208 }, 00:15:54.208 "memory_domains": [ 00:15:54.208 { 00:15:54.208 "dma_device_id": "system", 00:15:54.208 "dma_device_type": 1 00:15:54.208 }, 00:15:54.208 { 00:15:54.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.208 "dma_device_type": 2 00:15:54.208 }, 00:15:54.208 { 00:15:54.208 "dma_device_id": "system", 00:15:54.208 "dma_device_type": 1 00:15:54.208 }, 00:15:54.208 { 00:15:54.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.208 "dma_device_type": 2 00:15:54.208 } 00:15:54.208 ], 00:15:54.208 "driver_specific": { 00:15:54.208 "raid": { 00:15:54.208 "uuid": "e2d577e9-cc61-44d3-a11c-e39e20ee469c", 00:15:54.208 "strip_size_kb": 0, 00:15:54.208 "state": "online", 00:15:54.208 "raid_level": "raid1", 00:15:54.208 "superblock": true, 00:15:54.208 "num_base_bdevs": 2, 00:15:54.208 "num_base_bdevs_discovered": 2, 00:15:54.208 "num_base_bdevs_operational": 2, 00:15:54.208 "base_bdevs_list": [ 00:15:54.208 { 00:15:54.208 "name": "pt1", 00:15:54.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:54.208 "is_configured": true, 00:15:54.208 "data_offset": 256, 00:15:54.208 "data_size": 7936 00:15:54.208 }, 00:15:54.208 { 00:15:54.208 "name": "pt2", 00:15:54.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.208 "is_configured": true, 00:15:54.208 "data_offset": 256, 00:15:54.208 "data_size": 7936 00:15:54.208 } 00:15:54.208 ] 00:15:54.208 } 00:15:54.208 } 00:15:54.208 }' 00:15:54.208 
13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:54.208 pt2' 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.208 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.208 
13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.467 [2024-11-20 13:29:35.907889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' e2d577e9-cc61-44d3-a11c-e39e20ee469c '!=' e2d577e9-cc61-44d3-a11c-e39e20ee469c ']' 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.467 [2024-11-20 13:29:35.955601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:54.467 
13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.467 13:29:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.467 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.467 "name": "raid_bdev1", 00:15:54.467 "uuid": "e2d577e9-cc61-44d3-a11c-e39e20ee469c", 
00:15:54.467 "strip_size_kb": 0, 00:15:54.467 "state": "online", 00:15:54.467 "raid_level": "raid1", 00:15:54.467 "superblock": true, 00:15:54.467 "num_base_bdevs": 2, 00:15:54.467 "num_base_bdevs_discovered": 1, 00:15:54.467 "num_base_bdevs_operational": 1, 00:15:54.467 "base_bdevs_list": [ 00:15:54.467 { 00:15:54.467 "name": null, 00:15:54.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.467 "is_configured": false, 00:15:54.467 "data_offset": 0, 00:15:54.467 "data_size": 7936 00:15:54.467 }, 00:15:54.467 { 00:15:54.467 "name": "pt2", 00:15:54.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.467 "is_configured": true, 00:15:54.467 "data_offset": 256, 00:15:54.467 "data_size": 7936 00:15:54.467 } 00:15:54.467 ] 00:15:54.467 }' 00:15:54.467 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.467 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.033 [2024-11-20 13:29:36.402740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.033 [2024-11-20 13:29:36.402849] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:55.033 [2024-11-20 13:29:36.402994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.033 [2024-11-20 13:29:36.403127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.033 [2024-11-20 13:29:36.403176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:55.033 13:29:36 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:15:55.033 13:29:36 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.033 [2024-11-20 13:29:36.458608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:55.033 [2024-11-20 13:29:36.458676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.033 [2024-11-20 13:29:36.458699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:55.033 [2024-11-20 13:29:36.458708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.033 [2024-11-20 13:29:36.461062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.033 [2024-11-20 13:29:36.461097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:55.033 [2024-11-20 13:29:36.461196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:55.033 [2024-11-20 13:29:36.461229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.033 [2024-11-20 13:29:36.461313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:55.033 [2024-11-20 13:29:36.461322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:55.033 [2024-11-20 13:29:36.461595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:55.033 [2024-11-20 13:29:36.461722] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:55.033 [2024-11-20 13:29:36.461733] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 
00:15:55.033 [2024-11-20 13:29:36.461837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.033 pt2 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.033 13:29:36 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.033 "name": "raid_bdev1", 00:15:55.033 "uuid": "e2d577e9-cc61-44d3-a11c-e39e20ee469c", 00:15:55.033 "strip_size_kb": 0, 00:15:55.033 "state": "online", 00:15:55.033 "raid_level": "raid1", 00:15:55.033 "superblock": true, 00:15:55.033 "num_base_bdevs": 2, 00:15:55.033 "num_base_bdevs_discovered": 1, 00:15:55.033 "num_base_bdevs_operational": 1, 00:15:55.033 "base_bdevs_list": [ 00:15:55.033 { 00:15:55.033 "name": null, 00:15:55.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.033 "is_configured": false, 00:15:55.033 "data_offset": 256, 00:15:55.033 "data_size": 7936 00:15:55.033 }, 00:15:55.033 { 00:15:55.033 "name": "pt2", 00:15:55.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.033 "is_configured": true, 00:15:55.033 "data_offset": 256, 00:15:55.033 "data_size": 7936 00:15:55.033 } 00:15:55.033 ] 00:15:55.034 }' 00:15:55.034 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.034 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.291 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:55.291 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.291 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.291 [2024-11-20 13:29:36.945831] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.291 [2024-11-20 13:29:36.945915] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:55.291 [2024-11-20 13:29:36.946061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.291 [2024-11-20 13:29:36.946150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.291 [2024-11-20 13:29:36.946210] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:55.291 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.291 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:55.291 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.291 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.291 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.548 13:29:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.548 13:29:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.548 [2024-11-20 13:29:37.005739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:55.548 [2024-11-20 13:29:37.005826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.548 [2024-11-20 13:29:37.005846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:55.548 [2024-11-20 13:29:37.005861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.548 [2024-11-20 13:29:37.008446] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.548 [2024-11-20 13:29:37.008547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:55.548 [2024-11-20 13:29:37.008644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:55.548 [2024-11-20 13:29:37.008694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:55.548 [2024-11-20 13:29:37.008814] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:55.548 [2024-11-20 13:29:37.008829] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.548 [2024-11-20 13:29:37.008857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:55.548 [2024-11-20 13:29:37.008894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.548 [2024-11-20 13:29:37.008971] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:55.548 [2024-11-20 13:29:37.008984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:55.548 [2024-11-20 13:29:37.009287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:55.548 [2024-11-20 13:29:37.009426] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:55.548 [2024-11-20 13:29:37.009437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:55.548 [2024-11-20 13:29:37.009566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.548 pt1 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.548 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.549 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.549 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.549 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.549 "name": "raid_bdev1", 00:15:55.549 "uuid": "e2d577e9-cc61-44d3-a11c-e39e20ee469c", 00:15:55.549 "strip_size_kb": 0, 00:15:55.549 "state": "online", 00:15:55.549 "raid_level": "raid1", 
00:15:55.549 "superblock": true, 00:15:55.549 "num_base_bdevs": 2, 00:15:55.549 "num_base_bdevs_discovered": 1, 00:15:55.549 "num_base_bdevs_operational": 1, 00:15:55.549 "base_bdevs_list": [ 00:15:55.549 { 00:15:55.549 "name": null, 00:15:55.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.549 "is_configured": false, 00:15:55.549 "data_offset": 256, 00:15:55.549 "data_size": 7936 00:15:55.549 }, 00:15:55.549 { 00:15:55.549 "name": "pt2", 00:15:55.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.549 "is_configured": true, 00:15:55.549 "data_offset": 256, 00:15:55.549 "data_size": 7936 00:15:55.549 } 00:15:55.549 ] 00:15:55.549 }' 00:15:55.549 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.549 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.806 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:55.806 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:55.806 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.806 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.104 
[2024-11-20 13:29:37.517148] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' e2d577e9-cc61-44d3-a11c-e39e20ee469c '!=' e2d577e9-cc61-44d3-a11c-e39e20ee469c ']' 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96323 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 96323 ']' 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 96323 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96323 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.104 killing process with pid 96323 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96323' 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 96323 00:15:56.104 [2024-11-20 13:29:37.598951] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.104 [2024-11-20 13:29:37.599063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.104 [2024-11-20 13:29:37.599121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.104 [2024-11-20 13:29:37.599131] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:56.104 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 96323 00:15:56.104 [2024-11-20 13:29:37.622696] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:56.374 13:29:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:15:56.374 00:15:56.374 real 0m5.106s 00:15:56.374 user 0m8.446s 00:15:56.374 sys 0m1.046s 00:15:56.374 ************************************ 00:15:56.374 END TEST raid_superblock_test_4k 00:15:56.374 ************************************ 00:15:56.374 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.374 13:29:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.374 13:29:37 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:15:56.374 13:29:37 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:15:56.374 13:29:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:56.374 13:29:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.374 13:29:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:56.374 ************************************ 00:15:56.374 START TEST raid_rebuild_test_sb_4k 00:15:56.374 ************************************ 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:56.374 13:29:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=96635 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 96635 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 96635 ']' 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.374 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.375 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.375 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.375 13:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.375 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:56.375 Zero copy mechanism will not be used. 00:15:56.375 [2024-11-20 13:29:38.005803] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:15:56.375 [2024-11-20 13:29:38.005938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96635 ] 00:15:56.633 [2024-11-20 13:29:38.164651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.633 [2024-11-20 13:29:38.193070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.633 [2024-11-20 13:29:38.236869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.633 [2024-11-20 13:29:38.236911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.568 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.568 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:15:57.568 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:57.568 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:15:57.568 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.568 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.568 BaseBdev1_malloc 00:15:57.568 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.568 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:57.568 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.568 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.568 [2024-11-20 13:29:38.904203] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:57.568 [2024-11-20 13:29:38.904259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.568 [2024-11-20 13:29:38.904301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:57.568 [2024-11-20 13:29:38.904313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.568 [2024-11-20 13:29:38.906465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.568 [2024-11-20 13:29:38.906561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:57.568 BaseBdev1 00:15:57.568 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.568 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:57.568 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:15:57.568 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.568 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.568 BaseBdev2_malloc 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.569 [2024-11-20 13:29:38.932774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:57.569 [2024-11-20 13:29:38.932825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:57.569 [2024-11-20 13:29:38.932860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:57.569 [2024-11-20 13:29:38.932869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.569 [2024-11-20 13:29:38.935001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.569 [2024-11-20 13:29:38.935040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:57.569 BaseBdev2 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.569 spare_malloc 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.569 spare_delay 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.569 
[2024-11-20 13:29:38.973855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:57.569 [2024-11-20 13:29:38.973926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.569 [2024-11-20 13:29:38.973968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:57.569 [2024-11-20 13:29:38.973977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.569 [2024-11-20 13:29:38.976223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.569 [2024-11-20 13:29:38.976261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:57.569 spare 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.569 [2024-11-20 13:29:38.985895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.569 [2024-11-20 13:29:38.987852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.569 [2024-11-20 13:29:38.988057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:57.569 [2024-11-20 13:29:38.988078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:57.569 [2024-11-20 13:29:38.988404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:57.569 [2024-11-20 13:29:38.988579] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:57.569 [2024-11-20 
13:29:38.988593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:57.569 [2024-11-20 13:29:38.988745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.569 13:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.569 13:29:39 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.569 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.569 "name": "raid_bdev1", 00:15:57.569 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:15:57.569 "strip_size_kb": 0, 00:15:57.569 "state": "online", 00:15:57.569 "raid_level": "raid1", 00:15:57.569 "superblock": true, 00:15:57.569 "num_base_bdevs": 2, 00:15:57.569 "num_base_bdevs_discovered": 2, 00:15:57.569 "num_base_bdevs_operational": 2, 00:15:57.569 "base_bdevs_list": [ 00:15:57.569 { 00:15:57.569 "name": "BaseBdev1", 00:15:57.569 "uuid": "2787db17-b0ef-54b7-915a-7c18dfcfd56e", 00:15:57.569 "is_configured": true, 00:15:57.569 "data_offset": 256, 00:15:57.569 "data_size": 7936 00:15:57.569 }, 00:15:57.569 { 00:15:57.569 "name": "BaseBdev2", 00:15:57.569 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:15:57.569 "is_configured": true, 00:15:57.569 "data_offset": 256, 00:15:57.569 "data_size": 7936 00:15:57.569 } 00:15:57.569 ] 00:15:57.569 }' 00:15:57.569 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.569 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.828 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:57.828 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:57.828 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.828 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.828 [2024-11-20 13:29:39.429427] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.828 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.828 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:15:57.828 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.828 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:57.828 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.828 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.828 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.087 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:58.087 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:58.087 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:58.087 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:58.087 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:58.087 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:58.087 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:58.087 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:58.087 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:58.087 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:58.087 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:58.087 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:58.087 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:58.087 
13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:58.345 [2024-11-20 13:29:39.756728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:58.345 /dev/nbd0 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.345 1+0 records in 00:15:58.345 1+0 records out 00:15:58.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605098 s, 6.8 MB/s 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:15:58.345 13:29:39 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:58.345 13:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:58.912 7936+0 records in 00:15:58.912 7936+0 records out 00:15:58.912 32505856 bytes (33 MB, 31 MiB) copied, 0.650229 s, 50.0 MB/s 00:15:58.912 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:58.912 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:58.912 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:58.912 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:58.912 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:58.912 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.912 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:59.171 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:59.171 
[2024-11-20 13:29:40.705961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.172 [2024-11-20 13:29:40.731650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.172 13:29:40 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.172 "name": "raid_bdev1", 00:15:59.172 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:15:59.172 "strip_size_kb": 0, 00:15:59.172 "state": "online", 00:15:59.172 "raid_level": "raid1", 00:15:59.172 "superblock": true, 00:15:59.172 "num_base_bdevs": 2, 00:15:59.172 "num_base_bdevs_discovered": 1, 00:15:59.172 "num_base_bdevs_operational": 1, 00:15:59.172 "base_bdevs_list": [ 00:15:59.172 { 00:15:59.172 "name": null, 00:15:59.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.172 "is_configured": false, 00:15:59.172 "data_offset": 0, 00:15:59.172 "data_size": 7936 00:15:59.172 }, 00:15:59.172 { 00:15:59.172 "name": "BaseBdev2", 00:15:59.172 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:15:59.172 "is_configured": true, 00:15:59.172 "data_offset": 256, 00:15:59.172 
"data_size": 7936 00:15:59.172 } 00:15:59.172 ] 00:15:59.172 }' 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.172 13:29:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.740 13:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:59.740 13:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.740 13:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.740 [2024-11-20 13:29:41.190875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.740 [2024-11-20 13:29:41.205498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:15:59.740 13:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.740 13:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:59.740 [2024-11-20 13:29:41.208138] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:00.686 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.686 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.686 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.686 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.686 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.686 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.686 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:00.686 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.686 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.686 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.686 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.686 "name": "raid_bdev1", 00:16:00.686 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:00.686 "strip_size_kb": 0, 00:16:00.686 "state": "online", 00:16:00.686 "raid_level": "raid1", 00:16:00.686 "superblock": true, 00:16:00.686 "num_base_bdevs": 2, 00:16:00.686 "num_base_bdevs_discovered": 2, 00:16:00.686 "num_base_bdevs_operational": 2, 00:16:00.686 "process": { 00:16:00.686 "type": "rebuild", 00:16:00.686 "target": "spare", 00:16:00.686 "progress": { 00:16:00.686 "blocks": 2560, 00:16:00.686 "percent": 32 00:16:00.686 } 00:16:00.686 }, 00:16:00.686 "base_bdevs_list": [ 00:16:00.686 { 00:16:00.686 "name": "spare", 00:16:00.686 "uuid": "b46b3f16-c3ae-5abd-a0db-85d466a87969", 00:16:00.686 "is_configured": true, 00:16:00.686 "data_offset": 256, 00:16:00.686 "data_size": 7936 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "name": "BaseBdev2", 00:16:00.686 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:00.686 "is_configured": true, 00:16:00.686 "data_offset": 256, 00:16:00.686 "data_size": 7936 00:16:00.686 } 00:16:00.686 ] 00:16:00.686 }' 00:16:00.686 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.686 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.686 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.946 [2024-11-20 13:29:42.367753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:00.946 [2024-11-20 13:29:42.414154] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:00.946 [2024-11-20 13:29:42.414239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.946 [2024-11-20 13:29:42.414260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:00.946 [2024-11-20 13:29:42.414268] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.946 "name": "raid_bdev1", 00:16:00.946 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:00.946 "strip_size_kb": 0, 00:16:00.946 "state": "online", 00:16:00.946 "raid_level": "raid1", 00:16:00.946 "superblock": true, 00:16:00.946 "num_base_bdevs": 2, 00:16:00.946 "num_base_bdevs_discovered": 1, 00:16:00.946 "num_base_bdevs_operational": 1, 00:16:00.946 "base_bdevs_list": [ 00:16:00.946 { 00:16:00.946 "name": null, 00:16:00.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.946 "is_configured": false, 00:16:00.946 "data_offset": 0, 00:16:00.946 "data_size": 7936 00:16:00.946 }, 00:16:00.946 { 00:16:00.946 "name": "BaseBdev2", 00:16:00.946 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:00.946 "is_configured": true, 00:16:00.946 "data_offset": 256, 00:16:00.946 "data_size": 7936 00:16:00.946 } 00:16:00.946 ] 00:16:00.946 }' 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.946 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.205 13:29:42 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:01.205 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.205 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:01.205 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:01.205 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.205 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.205 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.205 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.205 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.205 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.464 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.464 "name": "raid_bdev1", 00:16:01.464 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:01.464 "strip_size_kb": 0, 00:16:01.464 "state": "online", 00:16:01.464 "raid_level": "raid1", 00:16:01.464 "superblock": true, 00:16:01.464 "num_base_bdevs": 2, 00:16:01.464 "num_base_bdevs_discovered": 1, 00:16:01.464 "num_base_bdevs_operational": 1, 00:16:01.464 "base_bdevs_list": [ 00:16:01.464 { 00:16:01.464 "name": null, 00:16:01.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.464 "is_configured": false, 00:16:01.464 "data_offset": 0, 00:16:01.464 "data_size": 7936 00:16:01.464 }, 00:16:01.464 { 00:16:01.464 "name": "BaseBdev2", 00:16:01.464 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:01.464 "is_configured": true, 00:16:01.464 "data_offset": 
256, 00:16:01.464 "data_size": 7936 00:16:01.464 } 00:16:01.464 ] 00:16:01.464 }' 00:16:01.464 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.464 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:01.464 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.464 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:01.464 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:01.464 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.464 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.464 [2024-11-20 13:29:42.978630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.464 [2024-11-20 13:29:42.983978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:16:01.464 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.464 13:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:01.464 [2024-11-20 13:29:42.986274] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:02.401 13:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.401 13:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.401 13:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.401 13:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.401 13:29:43 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.401 13:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.401 13:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.401 13:29:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.401 13:29:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.401 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.401 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.401 "name": "raid_bdev1", 00:16:02.401 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:02.401 "strip_size_kb": 0, 00:16:02.401 "state": "online", 00:16:02.401 "raid_level": "raid1", 00:16:02.401 "superblock": true, 00:16:02.401 "num_base_bdevs": 2, 00:16:02.401 "num_base_bdevs_discovered": 2, 00:16:02.401 "num_base_bdevs_operational": 2, 00:16:02.401 "process": { 00:16:02.401 "type": "rebuild", 00:16:02.401 "target": "spare", 00:16:02.401 "progress": { 00:16:02.401 "blocks": 2560, 00:16:02.401 "percent": 32 00:16:02.401 } 00:16:02.401 }, 00:16:02.401 "base_bdevs_list": [ 00:16:02.401 { 00:16:02.401 "name": "spare", 00:16:02.401 "uuid": "b46b3f16-c3ae-5abd-a0db-85d466a87969", 00:16:02.401 "is_configured": true, 00:16:02.401 "data_offset": 256, 00:16:02.401 "data_size": 7936 00:16:02.401 }, 00:16:02.401 { 00:16:02.401 "name": "BaseBdev2", 00:16:02.401 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:02.401 "is_configured": true, 00:16:02.401 "data_offset": 256, 00:16:02.401 "data_size": 7936 00:16:02.401 } 00:16:02.401 ] 00:16:02.401 }' 00:16:02.401 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.660 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:02.660 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.660 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.660 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:02.660 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:02.660 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:02.660 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:02.660 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=573 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.661 13:29:44 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.661 "name": "raid_bdev1", 00:16:02.661 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:02.661 "strip_size_kb": 0, 00:16:02.661 "state": "online", 00:16:02.661 "raid_level": "raid1", 00:16:02.661 "superblock": true, 00:16:02.661 "num_base_bdevs": 2, 00:16:02.661 "num_base_bdevs_discovered": 2, 00:16:02.661 "num_base_bdevs_operational": 2, 00:16:02.661 "process": { 00:16:02.661 "type": "rebuild", 00:16:02.661 "target": "spare", 00:16:02.661 "progress": { 00:16:02.661 "blocks": 2816, 00:16:02.661 "percent": 35 00:16:02.661 } 00:16:02.661 }, 00:16:02.661 "base_bdevs_list": [ 00:16:02.661 { 00:16:02.661 "name": "spare", 00:16:02.661 "uuid": "b46b3f16-c3ae-5abd-a0db-85d466a87969", 00:16:02.661 "is_configured": true, 00:16:02.661 "data_offset": 256, 00:16:02.661 "data_size": 7936 00:16:02.661 }, 00:16:02.661 { 00:16:02.661 "name": "BaseBdev2", 00:16:02.661 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:02.661 "is_configured": true, 00:16:02.661 "data_offset": 256, 00:16:02.661 "data_size": 7936 00:16:02.661 } 00:16:02.661 ] 00:16:02.661 }' 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.661 13:29:44 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.039 "name": "raid_bdev1", 00:16:04.039 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:04.039 "strip_size_kb": 0, 00:16:04.039 "state": "online", 00:16:04.039 "raid_level": "raid1", 00:16:04.039 "superblock": true, 00:16:04.039 "num_base_bdevs": 2, 00:16:04.039 "num_base_bdevs_discovered": 2, 00:16:04.039 "num_base_bdevs_operational": 2, 00:16:04.039 "process": { 00:16:04.039 "type": "rebuild", 00:16:04.039 "target": "spare", 00:16:04.039 "progress": { 00:16:04.039 "blocks": 5632, 00:16:04.039 "percent": 70 00:16:04.039 } 00:16:04.039 }, 00:16:04.039 "base_bdevs_list": [ 00:16:04.039 { 
00:16:04.039 "name": "spare", 00:16:04.039 "uuid": "b46b3f16-c3ae-5abd-a0db-85d466a87969", 00:16:04.039 "is_configured": true, 00:16:04.039 "data_offset": 256, 00:16:04.039 "data_size": 7936 00:16:04.039 }, 00:16:04.039 { 00:16:04.039 "name": "BaseBdev2", 00:16:04.039 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:04.039 "is_configured": true, 00:16:04.039 "data_offset": 256, 00:16:04.039 "data_size": 7936 00:16:04.039 } 00:16:04.039 ] 00:16:04.039 }' 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.039 13:29:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.606 [2024-11-20 13:29:46.100067] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:04.606 [2024-11-20 13:29:46.100172] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:04.606 [2024-11-20 13:29:46.100323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.864 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.864 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.864 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.864 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.864 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:04.864 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.864 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.864 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.864 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.864 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.864 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.864 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.864 "name": "raid_bdev1", 00:16:04.864 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:04.864 "strip_size_kb": 0, 00:16:04.864 "state": "online", 00:16:04.864 "raid_level": "raid1", 00:16:04.864 "superblock": true, 00:16:04.864 "num_base_bdevs": 2, 00:16:04.864 "num_base_bdevs_discovered": 2, 00:16:04.864 "num_base_bdevs_operational": 2, 00:16:04.864 "base_bdevs_list": [ 00:16:04.864 { 00:16:04.864 "name": "spare", 00:16:04.864 "uuid": "b46b3f16-c3ae-5abd-a0db-85d466a87969", 00:16:04.864 "is_configured": true, 00:16:04.864 "data_offset": 256, 00:16:04.864 "data_size": 7936 00:16:04.864 }, 00:16:04.864 { 00:16:04.864 "name": "BaseBdev2", 00:16:04.864 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:04.864 "is_configured": true, 00:16:04.864 "data_offset": 256, 00:16:04.864 "data_size": 7936 00:16:04.864 } 00:16:04.864 ] 00:16:04.864 }' 00:16:04.864 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.864 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:04.864 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.123 "name": "raid_bdev1", 00:16:05.123 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:05.123 "strip_size_kb": 0, 00:16:05.123 "state": "online", 00:16:05.123 "raid_level": "raid1", 00:16:05.123 "superblock": true, 00:16:05.123 "num_base_bdevs": 2, 00:16:05.123 "num_base_bdevs_discovered": 2, 00:16:05.123 "num_base_bdevs_operational": 2, 00:16:05.123 "base_bdevs_list": [ 00:16:05.123 { 00:16:05.123 "name": "spare", 00:16:05.123 "uuid": "b46b3f16-c3ae-5abd-a0db-85d466a87969", 00:16:05.123 "is_configured": true, 00:16:05.123 
"data_offset": 256, 00:16:05.123 "data_size": 7936 00:16:05.123 }, 00:16:05.123 { 00:16:05.123 "name": "BaseBdev2", 00:16:05.123 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:05.123 "is_configured": true, 00:16:05.123 "data_offset": 256, 00:16:05.123 "data_size": 7936 00:16:05.123 } 00:16:05.123 ] 00:16:05.123 }' 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.123 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
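The trace above (bdev_raid.sh @706-711) shows the rebuild-wait idiom: poll the RPC once per second, bounded by bash's builtin `SECONDS` counter, and `break` when the process type reported by `jq -r '.process.type // "none"'` is no longer `rebuild`. A runnable sketch with the RPC call replaced by a hypothetical stub so it executes standalone:

```shell
# Sketch of the polling loop from the trace. process_type is a stub for:
#   rpc_cmd bdev_raid_get_bdevs all \
#     | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"'
attempts=0
process_type() {
    attempts=$((attempts + 1))
    # Simulate a rebuild that finishes after two polls.
    if [ "$attempts" -lt 3 ]; then echo rebuild; else echo none; fi
}

timeout=573                       # same bound the trace sets at @706
while (( SECONDS < timeout )); do # SECONDS: seconds since shell start
    if [[ "$(process_type)" != rebuild ]]; then
        break                     # rebuild finished (@709 in the trace)
    fi
    sleep 1                       # @711: re-poll once per second
done
echo "polled $attempts times"
```

Using `SECONDS` rather than a manual counter makes the bound wall-clock time, so slow RPC responses still count against the timeout.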
rpc_cmd bdev_raid_get_bdevs all 00:16:05.124 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.124 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.124 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.124 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.124 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.124 "name": "raid_bdev1", 00:16:05.124 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:05.124 "strip_size_kb": 0, 00:16:05.124 "state": "online", 00:16:05.124 "raid_level": "raid1", 00:16:05.124 "superblock": true, 00:16:05.124 "num_base_bdevs": 2, 00:16:05.124 "num_base_bdevs_discovered": 2, 00:16:05.124 "num_base_bdevs_operational": 2, 00:16:05.124 "base_bdevs_list": [ 00:16:05.124 { 00:16:05.124 "name": "spare", 00:16:05.124 "uuid": "b46b3f16-c3ae-5abd-a0db-85d466a87969", 00:16:05.124 "is_configured": true, 00:16:05.124 "data_offset": 256, 00:16:05.124 "data_size": 7936 00:16:05.124 }, 00:16:05.124 { 00:16:05.124 "name": "BaseBdev2", 00:16:05.124 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:05.124 "is_configured": true, 00:16:05.124 "data_offset": 256, 00:16:05.124 "data_size": 7936 00:16:05.124 } 00:16:05.124 ] 00:16:05.124 }' 00:16:05.124 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.124 13:29:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.692 
[2024-11-20 13:29:47.135607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.692 [2024-11-20 13:29:47.135717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.692 [2024-11-20 13:29:47.135866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.692 [2024-11-20 13:29:47.135971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.692 [2024-11-20 13:29:47.136055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:05.692 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:05.951 /dev/nbd0 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.951 1+0 records in 00:16:05.951 1+0 records out 00:16:05.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547287 s, 7.5 MB/s 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:05.951 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:06.211 /dev/nbd1 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.211 1+0 records in 00:16:06.211 1+0 records out 00:16:06.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465456 s, 8.8 MB/s 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.211 13:29:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:06.471 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:06.471 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:06.471 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:06.471 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.471 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.471 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:06.471 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:06.471 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.471 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.471 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:06.730 13:29:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.730 [2024-11-20 13:29:48.386798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:06.730 [2024-11-20 13:29:48.386861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.730 [2024-11-20 13:29:48.386882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:06.730 [2024-11-20 13:29:48.386895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.730 [2024-11-20 13:29:48.389269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.730 
[2024-11-20 13:29:48.389314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:06.730 [2024-11-20 13:29:48.389396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:06.730 [2024-11-20 13:29:48.389448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.730 [2024-11-20 13:29:48.389570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.730 spare 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.730 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.988 [2024-11-20 13:29:48.489503] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:16:06.988 [2024-11-20 13:29:48.489577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:06.988 [2024-11-20 13:29:48.489961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:16:06.988 [2024-11-20 13:29:48.490200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:16:06.988 [2024-11-20 13:29:48.490225] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:16:06.988 [2024-11-20 13:29:48.490406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.988 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.988 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:06.988 13:29:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.988 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.988 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.988 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.988 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.988 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.988 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.988 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.988 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.988 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.988 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.988 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.988 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.988 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.988 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.988 "name": "raid_bdev1", 00:16:06.988 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:06.988 "strip_size_kb": 0, 00:16:06.988 "state": "online", 00:16:06.988 "raid_level": "raid1", 00:16:06.988 "superblock": true, 00:16:06.988 "num_base_bdevs": 2, 00:16:06.988 "num_base_bdevs_discovered": 2, 00:16:06.989 "num_base_bdevs_operational": 2, 
00:16:06.989 "base_bdevs_list": [ 00:16:06.989 { 00:16:06.989 "name": "spare", 00:16:06.989 "uuid": "b46b3f16-c3ae-5abd-a0db-85d466a87969", 00:16:06.989 "is_configured": true, 00:16:06.989 "data_offset": 256, 00:16:06.989 "data_size": 7936 00:16:06.989 }, 00:16:06.989 { 00:16:06.989 "name": "BaseBdev2", 00:16:06.989 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:06.989 "is_configured": true, 00:16:06.989 "data_offset": 256, 00:16:06.989 "data_size": 7936 00:16:06.989 } 00:16:06.989 ] 00:16:06.989 }' 00:16:06.989 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.989 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.558 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:07.558 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.558 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:07.558 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:07.558 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.558 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.558 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.558 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.558 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.558 13:29:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.558 "name": "raid_bdev1", 00:16:07.558 
"uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:07.558 "strip_size_kb": 0, 00:16:07.558 "state": "online", 00:16:07.558 "raid_level": "raid1", 00:16:07.558 "superblock": true, 00:16:07.558 "num_base_bdevs": 2, 00:16:07.558 "num_base_bdevs_discovered": 2, 00:16:07.558 "num_base_bdevs_operational": 2, 00:16:07.558 "base_bdevs_list": [ 00:16:07.558 { 00:16:07.558 "name": "spare", 00:16:07.558 "uuid": "b46b3f16-c3ae-5abd-a0db-85d466a87969", 00:16:07.558 "is_configured": true, 00:16:07.558 "data_offset": 256, 00:16:07.558 "data_size": 7936 00:16:07.558 }, 00:16:07.558 { 00:16:07.558 "name": "BaseBdev2", 00:16:07.558 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:07.558 "is_configured": true, 00:16:07.558 "data_offset": 256, 00:16:07.558 "data_size": 7936 00:16:07.558 } 00:16:07.558 ] 00:16:07.558 }' 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.558 [2024-11-20 13:29:49.157610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.558 
13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.558 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.558 "name": "raid_bdev1", 00:16:07.558 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:07.558 "strip_size_kb": 0, 00:16:07.558 "state": "online", 00:16:07.558 "raid_level": "raid1", 00:16:07.558 "superblock": true, 00:16:07.558 "num_base_bdevs": 2, 00:16:07.558 "num_base_bdevs_discovered": 1, 00:16:07.559 "num_base_bdevs_operational": 1, 00:16:07.559 "base_bdevs_list": [ 00:16:07.559 { 00:16:07.559 "name": null, 00:16:07.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.559 "is_configured": false, 00:16:07.559 "data_offset": 0, 00:16:07.559 "data_size": 7936 00:16:07.559 }, 00:16:07.559 { 00:16:07.559 "name": "BaseBdev2", 00:16:07.559 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:07.559 "is_configured": true, 00:16:07.559 "data_offset": 256, 00:16:07.559 "data_size": 7936 00:16:07.559 } 00:16:07.559 ] 00:16:07.559 }' 00:16:07.559 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.559 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.127 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:08.127 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.127 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.127 [2024-11-20 13:29:49.688727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:08.127 [2024-11-20 13:29:49.688952] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:16:08.127 [2024-11-20 13:29:49.688968] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:08.127 [2024-11-20 13:29:49.689025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:08.127 [2024-11-20 13:29:49.694056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:16:08.127 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.127 13:29:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:08.127 [2024-11-20 13:29:49.696299] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:09.065 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.065 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.065 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.065 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.065 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.065 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.065 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.065 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.065 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.065 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.324 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.324 
"name": "raid_bdev1", 00:16:09.324 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:09.324 "strip_size_kb": 0, 00:16:09.324 "state": "online", 00:16:09.324 "raid_level": "raid1", 00:16:09.324 "superblock": true, 00:16:09.324 "num_base_bdevs": 2, 00:16:09.324 "num_base_bdevs_discovered": 2, 00:16:09.324 "num_base_bdevs_operational": 2, 00:16:09.324 "process": { 00:16:09.324 "type": "rebuild", 00:16:09.324 "target": "spare", 00:16:09.324 "progress": { 00:16:09.324 "blocks": 2560, 00:16:09.324 "percent": 32 00:16:09.324 } 00:16:09.324 }, 00:16:09.324 "base_bdevs_list": [ 00:16:09.324 { 00:16:09.324 "name": "spare", 00:16:09.324 "uuid": "b46b3f16-c3ae-5abd-a0db-85d466a87969", 00:16:09.324 "is_configured": true, 00:16:09.324 "data_offset": 256, 00:16:09.324 "data_size": 7936 00:16:09.324 }, 00:16:09.324 { 00:16:09.324 "name": "BaseBdev2", 00:16:09.324 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:09.324 "is_configured": true, 00:16:09.324 "data_offset": 256, 00:16:09.324 "data_size": 7936 00:16:09.324 } 00:16:09.324 ] 00:16:09.324 }' 00:16:09.324 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.324 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.324 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.324 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.324 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:09.324 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.324 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.324 [2024-11-20 13:29:50.836166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.325 [2024-11-20 
13:29:50.901570] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:09.325 [2024-11-20 13:29:50.901672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.325 [2024-11-20 13:29:50.901690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.325 [2024-11-20 13:29:50.901698] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.325 "name": "raid_bdev1", 00:16:09.325 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:09.325 "strip_size_kb": 0, 00:16:09.325 "state": "online", 00:16:09.325 "raid_level": "raid1", 00:16:09.325 "superblock": true, 00:16:09.325 "num_base_bdevs": 2, 00:16:09.325 "num_base_bdevs_discovered": 1, 00:16:09.325 "num_base_bdevs_operational": 1, 00:16:09.325 "base_bdevs_list": [ 00:16:09.325 { 00:16:09.325 "name": null, 00:16:09.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.325 "is_configured": false, 00:16:09.325 "data_offset": 0, 00:16:09.325 "data_size": 7936 00:16:09.325 }, 00:16:09.325 { 00:16:09.325 "name": "BaseBdev2", 00:16:09.325 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:09.325 "is_configured": true, 00:16:09.325 "data_offset": 256, 00:16:09.325 "data_size": 7936 00:16:09.325 } 00:16:09.325 ] 00:16:09.325 }' 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.325 13:29:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.891 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:09.891 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.891 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.891 [2024-11-20 13:29:51.309925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:09.891 [2024-11-20 13:29:51.310016] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.891 [2024-11-20 13:29:51.310048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:09.891 [2024-11-20 13:29:51.310060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.891 [2024-11-20 13:29:51.310538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.891 [2024-11-20 13:29:51.310571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:09.891 [2024-11-20 13:29:51.310673] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:09.891 [2024-11-20 13:29:51.310693] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:09.891 [2024-11-20 13:29:51.310711] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:09.891 [2024-11-20 13:29:51.310737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.891 [2024-11-20 13:29:51.315622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:16:09.891 spare 00:16:09.892 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.892 13:29:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:09.892 [2024-11-20 13:29:51.317701] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.827 "name": "raid_bdev1", 00:16:10.827 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:10.827 "strip_size_kb": 0, 00:16:10.827 
"state": "online", 00:16:10.827 "raid_level": "raid1", 00:16:10.827 "superblock": true, 00:16:10.827 "num_base_bdevs": 2, 00:16:10.827 "num_base_bdevs_discovered": 2, 00:16:10.827 "num_base_bdevs_operational": 2, 00:16:10.827 "process": { 00:16:10.827 "type": "rebuild", 00:16:10.827 "target": "spare", 00:16:10.827 "progress": { 00:16:10.827 "blocks": 2560, 00:16:10.827 "percent": 32 00:16:10.827 } 00:16:10.827 }, 00:16:10.827 "base_bdevs_list": [ 00:16:10.827 { 00:16:10.827 "name": "spare", 00:16:10.827 "uuid": "b46b3f16-c3ae-5abd-a0db-85d466a87969", 00:16:10.827 "is_configured": true, 00:16:10.827 "data_offset": 256, 00:16:10.827 "data_size": 7936 00:16:10.827 }, 00:16:10.827 { 00:16:10.827 "name": "BaseBdev2", 00:16:10.827 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:10.827 "is_configured": true, 00:16:10.827 "data_offset": 256, 00:16:10.827 "data_size": 7936 00:16:10.827 } 00:16:10.827 ] 00:16:10.827 }' 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.827 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.827 [2024-11-20 13:29:52.461627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.085 [2024-11-20 13:29:52.523173] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:16:11.085 [2024-11-20 13:29:52.523299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.085 [2024-11-20 13:29:52.523318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.085 [2024-11-20 13:29:52.523328] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:11.085 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.085 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:11.085 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.085 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.085 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.085 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.085 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:11.085 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.085 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.085 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.085 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.085 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.085 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.085 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.086 13:29:52 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.086 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.086 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.086 "name": "raid_bdev1", 00:16:11.086 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:11.086 "strip_size_kb": 0, 00:16:11.086 "state": "online", 00:16:11.086 "raid_level": "raid1", 00:16:11.086 "superblock": true, 00:16:11.086 "num_base_bdevs": 2, 00:16:11.086 "num_base_bdevs_discovered": 1, 00:16:11.086 "num_base_bdevs_operational": 1, 00:16:11.086 "base_bdevs_list": [ 00:16:11.086 { 00:16:11.086 "name": null, 00:16:11.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.086 "is_configured": false, 00:16:11.086 "data_offset": 0, 00:16:11.086 "data_size": 7936 00:16:11.086 }, 00:16:11.086 { 00:16:11.086 "name": "BaseBdev2", 00:16:11.086 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:11.086 "is_configured": true, 00:16:11.086 "data_offset": 256, 00:16:11.086 "data_size": 7936 00:16:11.086 } 00:16:11.086 ] 00:16:11.086 }' 00:16:11.086 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.086 13:29:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.344 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.344 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.344 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.344 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.344 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.602 "name": "raid_bdev1", 00:16:11.602 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:11.602 "strip_size_kb": 0, 00:16:11.602 "state": "online", 00:16:11.602 "raid_level": "raid1", 00:16:11.602 "superblock": true, 00:16:11.602 "num_base_bdevs": 2, 00:16:11.602 "num_base_bdevs_discovered": 1, 00:16:11.602 "num_base_bdevs_operational": 1, 00:16:11.602 "base_bdevs_list": [ 00:16:11.602 { 00:16:11.602 "name": null, 00:16:11.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.602 "is_configured": false, 00:16:11.602 "data_offset": 0, 00:16:11.602 "data_size": 7936 00:16:11.602 }, 00:16:11.602 { 00:16:11.602 "name": "BaseBdev2", 00:16:11.602 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:11.602 "is_configured": true, 00:16:11.602 "data_offset": 256, 00:16:11.602 "data_size": 7936 00:16:11.602 } 00:16:11.602 ] 00:16:11.602 }' 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.602 [2024-11-20 13:29:53.139515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:11.602 [2024-11-20 13:29:53.139585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.602 [2024-11-20 13:29:53.139626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:11.602 [2024-11-20 13:29:53.139639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.602 [2024-11-20 13:29:53.140135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.602 [2024-11-20 13:29:53.140168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:11.602 [2024-11-20 13:29:53.140256] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:11.602 [2024-11-20 13:29:53.140299] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:11.602 [2024-11-20 13:29:53.140323] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:11.602 [2024-11-20 13:29:53.140338] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:11.602 BaseBdev1 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.602 13:29:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.536 "name": "raid_bdev1", 00:16:12.536 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:12.536 "strip_size_kb": 0, 00:16:12.536 "state": "online", 00:16:12.536 "raid_level": "raid1", 00:16:12.536 "superblock": true, 00:16:12.536 "num_base_bdevs": 2, 00:16:12.536 "num_base_bdevs_discovered": 1, 00:16:12.536 "num_base_bdevs_operational": 1, 00:16:12.536 "base_bdevs_list": [ 00:16:12.536 { 00:16:12.536 "name": null, 00:16:12.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.536 "is_configured": false, 00:16:12.536 "data_offset": 0, 00:16:12.536 "data_size": 7936 00:16:12.536 }, 00:16:12.536 { 00:16:12.536 "name": "BaseBdev2", 00:16:12.536 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:12.536 "is_configured": true, 00:16:12.536 "data_offset": 256, 00:16:12.536 "data_size": 7936 00:16:12.536 } 00:16:12.536 ] 00:16:12.536 }' 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.536 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.104 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.104 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.104 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.104 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.104 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.104 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.104 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:13.104 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.104 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.104 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.104 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.104 "name": "raid_bdev1", 00:16:13.104 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:13.104 "strip_size_kb": 0, 00:16:13.104 "state": "online", 00:16:13.104 "raid_level": "raid1", 00:16:13.104 "superblock": true, 00:16:13.104 "num_base_bdevs": 2, 00:16:13.104 "num_base_bdevs_discovered": 1, 00:16:13.104 "num_base_bdevs_operational": 1, 00:16:13.104 "base_bdevs_list": [ 00:16:13.104 { 00:16:13.104 "name": null, 00:16:13.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.104 "is_configured": false, 00:16:13.104 "data_offset": 0, 00:16:13.104 "data_size": 7936 00:16:13.104 }, 00:16:13.104 { 00:16:13.104 "name": "BaseBdev2", 00:16:13.104 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:13.104 "is_configured": true, 00:16:13.104 "data_offset": 256, 00:16:13.104 "data_size": 7936 00:16:13.104 } 00:16:13.104 ] 00:16:13.104 }' 00:16:13.104 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.104 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.104 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.362 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.363 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.363 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:16:13.363 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.363 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:13.363 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:13.363 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:13.363 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:13.363 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.363 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.363 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.363 [2024-11-20 13:29:54.788742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:13.363 [2024-11-20 13:29:54.788939] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:13.363 [2024-11-20 13:29:54.788962] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:13.363 request: 00:16:13.363 { 00:16:13.363 "base_bdev": "BaseBdev1", 00:16:13.363 "raid_bdev": "raid_bdev1", 00:16:13.363 "method": "bdev_raid_add_base_bdev", 00:16:13.363 "req_id": 1 00:16:13.363 } 00:16:13.363 Got JSON-RPC error response 00:16:13.363 response: 00:16:13.363 { 00:16:13.363 "code": -22, 00:16:13.363 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:13.363 } 00:16:13.363 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:16:13.363 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:16:13.363 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:13.363 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:13.363 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:13.363 13:29:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:14.299 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:14.299 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.299 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.299 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.299 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.299 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:14.299 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.299 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.300 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.300 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.300 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.300 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.300 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:14.300 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.300 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.300 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.300 "name": "raid_bdev1", 00:16:14.300 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:14.300 "strip_size_kb": 0, 00:16:14.300 "state": "online", 00:16:14.300 "raid_level": "raid1", 00:16:14.300 "superblock": true, 00:16:14.300 "num_base_bdevs": 2, 00:16:14.300 "num_base_bdevs_discovered": 1, 00:16:14.300 "num_base_bdevs_operational": 1, 00:16:14.300 "base_bdevs_list": [ 00:16:14.300 { 00:16:14.300 "name": null, 00:16:14.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.300 "is_configured": false, 00:16:14.300 "data_offset": 0, 00:16:14.300 "data_size": 7936 00:16:14.300 }, 00:16:14.300 { 00:16:14.300 "name": "BaseBdev2", 00:16:14.300 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:14.300 "is_configured": true, 00:16:14.300 "data_offset": 256, 00:16:14.300 "data_size": 7936 00:16:14.300 } 00:16:14.300 ] 00:16:14.300 }' 00:16:14.300 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.300 13:29:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.869 13:29:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.869 "name": "raid_bdev1", 00:16:14.869 "uuid": "57bedf4c-7b23-4740-b127-56fd9c6256bb", 00:16:14.869 "strip_size_kb": 0, 00:16:14.869 "state": "online", 00:16:14.869 "raid_level": "raid1", 00:16:14.869 "superblock": true, 00:16:14.869 "num_base_bdevs": 2, 00:16:14.869 "num_base_bdevs_discovered": 1, 00:16:14.869 "num_base_bdevs_operational": 1, 00:16:14.869 "base_bdevs_list": [ 00:16:14.869 { 00:16:14.869 "name": null, 00:16:14.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.869 "is_configured": false, 00:16:14.869 "data_offset": 0, 00:16:14.869 "data_size": 7936 00:16:14.869 }, 00:16:14.869 { 00:16:14.869 "name": "BaseBdev2", 00:16:14.869 "uuid": "46ca27e9-30fc-5eb2-818f-a5ec99293dea", 00:16:14.869 "is_configured": true, 00:16:14.869 "data_offset": 256, 00:16:14.869 "data_size": 7936 00:16:14.869 } 00:16:14.869 ] 00:16:14.869 }' 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.869 13:29:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 96635 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 96635 ']' 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 96635 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96635 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.869 killing process with pid 96635 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96635' 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 96635 00:16:14.869 Received shutdown signal, test time was about 60.000000 seconds 00:16:14.869 00:16:14.869 Latency(us) 00:16:14.869 [2024-11-20T13:29:56.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.869 [2024-11-20T13:29:56.537Z] =================================================================================================================== 00:16:14.869 [2024-11-20T13:29:56.537Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:14.869 [2024-11-20 13:29:56.471588] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.869 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 96635 00:16:14.869 [2024-11-20 13:29:56.471742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.869 [2024-11-20 
13:29:56.471815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.869 [2024-11-20 13:29:56.471825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:16:14.869 [2024-11-20 13:29:56.504284] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.129 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:16:15.129 00:16:15.129 real 0m18.796s 00:16:15.129 user 0m25.147s 00:16:15.129 sys 0m2.716s 00:16:15.129 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.129 13:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.129 ************************************ 00:16:15.129 END TEST raid_rebuild_test_sb_4k 00:16:15.129 ************************************ 00:16:15.129 13:29:56 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:16:15.129 13:29:56 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:16:15.129 13:29:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:15.129 13:29:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.129 13:29:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.129 ************************************ 00:16:15.129 START TEST raid_state_function_test_sb_md_separate 00:16:15.129 ************************************ 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:15.129 
13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:15.129 13:29:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97314 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97314' 00:16:15.129 Process raid pid: 97314 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97314 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 97314 ']' 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.129 13:29:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.394 [2024-11-20 13:29:56.865909] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:16:15.394 [2024-11-20 13:29:56.866451] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.394 [2024-11-20 13:29:57.008320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.394 [2024-11-20 13:29:57.040435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.663 [2024-11-20 13:29:57.084278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.663 [2024-11-20 13:29:57.084331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.233 [2024-11-20 13:29:57.794249] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:16.233 [2024-11-20 13:29:57.794331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:16:16.233 [2024-11-20 13:29:57.794343] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:16.233 [2024-11-20 13:29:57.794355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.233 "name": "Existed_Raid", 00:16:16.233 "uuid": "650b0e6b-3f26-4c19-acdf-1831928df8e5", 00:16:16.233 "strip_size_kb": 0, 00:16:16.233 "state": "configuring", 00:16:16.233 "raid_level": "raid1", 00:16:16.233 "superblock": true, 00:16:16.233 "num_base_bdevs": 2, 00:16:16.233 "num_base_bdevs_discovered": 0, 00:16:16.233 "num_base_bdevs_operational": 2, 00:16:16.233 "base_bdevs_list": [ 00:16:16.233 { 00:16:16.233 "name": "BaseBdev1", 00:16:16.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.233 "is_configured": false, 00:16:16.233 "data_offset": 0, 00:16:16.233 "data_size": 0 00:16:16.233 }, 00:16:16.233 { 00:16:16.233 "name": "BaseBdev2", 00:16:16.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.233 "is_configured": false, 00:16:16.233 "data_offset": 0, 00:16:16.233 "data_size": 0 00:16:16.233 } 00:16:16.233 ] 00:16:16.233 }' 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.233 13:29:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.803 
[2024-11-20 13:29:58.233350] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:16.803 [2024-11-20 13:29:58.233406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.803 [2024-11-20 13:29:58.245365] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:16.803 [2024-11-20 13:29:58.245421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:16.803 [2024-11-20 13:29:58.245431] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:16.803 [2024-11-20 13:29:58.245451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.803 [2024-11-20 13:29:58.266899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.803 
BaseBdev1 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.803 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.803 [ 00:16:16.803 { 00:16:16.803 "name": "BaseBdev1", 00:16:16.803 "aliases": [ 00:16:16.803 "258754ac-bbe7-4d25-84f5-d8bdeef117c3" 00:16:16.803 ], 00:16:16.803 "product_name": "Malloc disk", 
00:16:16.803 "block_size": 4096, 00:16:16.803 "num_blocks": 8192, 00:16:16.803 "uuid": "258754ac-bbe7-4d25-84f5-d8bdeef117c3", 00:16:16.803 "md_size": 32, 00:16:16.803 "md_interleave": false, 00:16:16.803 "dif_type": 0, 00:16:16.803 "assigned_rate_limits": { 00:16:16.803 "rw_ios_per_sec": 0, 00:16:16.803 "rw_mbytes_per_sec": 0, 00:16:16.803 "r_mbytes_per_sec": 0, 00:16:16.803 "w_mbytes_per_sec": 0 00:16:16.803 }, 00:16:16.803 "claimed": true, 00:16:16.803 "claim_type": "exclusive_write", 00:16:16.803 "zoned": false, 00:16:16.803 "supported_io_types": { 00:16:16.803 "read": true, 00:16:16.803 "write": true, 00:16:16.803 "unmap": true, 00:16:16.803 "flush": true, 00:16:16.803 "reset": true, 00:16:16.803 "nvme_admin": false, 00:16:16.803 "nvme_io": false, 00:16:16.803 "nvme_io_md": false, 00:16:16.803 "write_zeroes": true, 00:16:16.803 "zcopy": true, 00:16:16.803 "get_zone_info": false, 00:16:16.803 "zone_management": false, 00:16:16.803 "zone_append": false, 00:16:16.803 "compare": false, 00:16:16.803 "compare_and_write": false, 00:16:16.803 "abort": true, 00:16:16.804 "seek_hole": false, 00:16:16.804 "seek_data": false, 00:16:16.804 "copy": true, 00:16:16.804 "nvme_iov_md": false 00:16:16.804 }, 00:16:16.804 "memory_domains": [ 00:16:16.804 { 00:16:16.804 "dma_device_id": "system", 00:16:16.804 "dma_device_type": 1 00:16:16.804 }, 00:16:16.804 { 00:16:16.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.804 "dma_device_type": 2 00:16:16.804 } 00:16:16.804 ], 00:16:16.804 "driver_specific": {} 00:16:16.804 } 00:16:16.804 ] 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:16.804 13:29:58 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.804 "name": "Existed_Raid", 00:16:16.804 "uuid": "2fa4b3c3-ca92-476d-a664-ba75e70f6699", 
00:16:16.804 "strip_size_kb": 0, 00:16:16.804 "state": "configuring", 00:16:16.804 "raid_level": "raid1", 00:16:16.804 "superblock": true, 00:16:16.804 "num_base_bdevs": 2, 00:16:16.804 "num_base_bdevs_discovered": 1, 00:16:16.804 "num_base_bdevs_operational": 2, 00:16:16.804 "base_bdevs_list": [ 00:16:16.804 { 00:16:16.804 "name": "BaseBdev1", 00:16:16.804 "uuid": "258754ac-bbe7-4d25-84f5-d8bdeef117c3", 00:16:16.804 "is_configured": true, 00:16:16.804 "data_offset": 256, 00:16:16.804 "data_size": 7936 00:16:16.804 }, 00:16:16.804 { 00:16:16.804 "name": "BaseBdev2", 00:16:16.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.804 "is_configured": false, 00:16:16.804 "data_offset": 0, 00:16:16.804 "data_size": 0 00:16:16.804 } 00:16:16.804 ] 00:16:16.804 }' 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.804 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.065 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:17.065 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.065 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.065 [2024-11-20 13:29:58.726207] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:17.065 [2024-11-20 13:29:58.726271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:16:17.065 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.065 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:17.065 13:29:58 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.065 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.324 [2024-11-20 13:29:58.738220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:17.324 [2024-11-20 13:29:58.740162] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.324 [2024-11-20 13:29:58.740224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.324 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.324 "name": "Existed_Raid", 00:16:17.325 "uuid": "13c90058-643d-4a3c-afc1-7b0ecfa792db", 00:16:17.325 "strip_size_kb": 0, 00:16:17.325 "state": "configuring", 00:16:17.325 "raid_level": "raid1", 00:16:17.325 "superblock": true, 00:16:17.325 "num_base_bdevs": 2, 00:16:17.325 "num_base_bdevs_discovered": 1, 00:16:17.325 "num_base_bdevs_operational": 2, 00:16:17.325 "base_bdevs_list": [ 00:16:17.325 { 00:16:17.325 "name": "BaseBdev1", 00:16:17.325 "uuid": "258754ac-bbe7-4d25-84f5-d8bdeef117c3", 00:16:17.325 "is_configured": true, 00:16:17.325 "data_offset": 256, 00:16:17.325 "data_size": 7936 00:16:17.325 }, 00:16:17.325 { 00:16:17.325 "name": "BaseBdev2", 00:16:17.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.325 "is_configured": false, 00:16:17.325 "data_offset": 0, 00:16:17.325 "data_size": 0 00:16:17.325 } 00:16:17.325 ] 00:16:17.325 }' 00:16:17.325 13:29:58 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.325 13:29:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.585 [2024-11-20 13:29:59.201325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:17.585 [2024-11-20 13:29:59.201552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:17.585 [2024-11-20 13:29:59.201568] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:17.585 [2024-11-20 13:29:59.201670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:17.585 [2024-11-20 13:29:59.201800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:17.585 [2024-11-20 13:29:59.201825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:16:17.585 [2024-11-20 13:29:59.201905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.585 BaseBdev2 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.585 [ 00:16:17.585 { 00:16:17.585 "name": "BaseBdev2", 00:16:17.585 "aliases": [ 00:16:17.585 "cac83802-11d6-4a6c-88f8-164702d244ea" 00:16:17.585 ], 00:16:17.585 "product_name": "Malloc disk", 00:16:17.585 "block_size": 4096, 00:16:17.585 "num_blocks": 8192, 00:16:17.585 "uuid": "cac83802-11d6-4a6c-88f8-164702d244ea", 00:16:17.585 "md_size": 32, 00:16:17.585 "md_interleave": false, 00:16:17.585 "dif_type": 0, 00:16:17.585 "assigned_rate_limits": { 00:16:17.585 "rw_ios_per_sec": 0, 00:16:17.585 "rw_mbytes_per_sec": 0, 00:16:17.585 "r_mbytes_per_sec": 0, 00:16:17.585 "w_mbytes_per_sec": 0 00:16:17.585 }, 00:16:17.585 "claimed": true, 00:16:17.585 "claim_type": 
"exclusive_write", 00:16:17.585 "zoned": false, 00:16:17.585 "supported_io_types": { 00:16:17.585 "read": true, 00:16:17.585 "write": true, 00:16:17.585 "unmap": true, 00:16:17.585 "flush": true, 00:16:17.585 "reset": true, 00:16:17.585 "nvme_admin": false, 00:16:17.585 "nvme_io": false, 00:16:17.585 "nvme_io_md": false, 00:16:17.585 "write_zeroes": true, 00:16:17.585 "zcopy": true, 00:16:17.585 "get_zone_info": false, 00:16:17.585 "zone_management": false, 00:16:17.585 "zone_append": false, 00:16:17.585 "compare": false, 00:16:17.585 "compare_and_write": false, 00:16:17.585 "abort": true, 00:16:17.585 "seek_hole": false, 00:16:17.585 "seek_data": false, 00:16:17.585 "copy": true, 00:16:17.585 "nvme_iov_md": false 00:16:17.585 }, 00:16:17.585 "memory_domains": [ 00:16:17.585 { 00:16:17.585 "dma_device_id": "system", 00:16:17.585 "dma_device_type": 1 00:16:17.585 }, 00:16:17.585 { 00:16:17.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.585 "dma_device_type": 2 00:16:17.585 } 00:16:17.585 ], 00:16:17.585 "driver_specific": {} 00:16:17.585 } 00:16:17.585 ] 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:17.585 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:17.586 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.586 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.586 
13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.586 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.586 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:17.586 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.586 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.586 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.586 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.846 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.846 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.846 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.846 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.846 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.846 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.846 "name": "Existed_Raid", 00:16:17.846 "uuid": "13c90058-643d-4a3c-afc1-7b0ecfa792db", 00:16:17.846 "strip_size_kb": 0, 00:16:17.846 "state": "online", 00:16:17.846 "raid_level": "raid1", 00:16:17.846 "superblock": true, 00:16:17.846 "num_base_bdevs": 2, 00:16:17.846 "num_base_bdevs_discovered": 2, 00:16:17.846 "num_base_bdevs_operational": 2, 00:16:17.846 
"base_bdevs_list": [ 00:16:17.846 { 00:16:17.846 "name": "BaseBdev1", 00:16:17.846 "uuid": "258754ac-bbe7-4d25-84f5-d8bdeef117c3", 00:16:17.846 "is_configured": true, 00:16:17.846 "data_offset": 256, 00:16:17.846 "data_size": 7936 00:16:17.846 }, 00:16:17.846 { 00:16:17.846 "name": "BaseBdev2", 00:16:17.846 "uuid": "cac83802-11d6-4a6c-88f8-164702d244ea", 00:16:17.846 "is_configured": true, 00:16:17.846 "data_offset": 256, 00:16:17.846 "data_size": 7936 00:16:17.846 } 00:16:17.846 ] 00:16:17.846 }' 00:16:17.846 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.846 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.106 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:18.106 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:18.106 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:18.106 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:18.106 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:18.106 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:18.106 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:18.106 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:18.106 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.106 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:16:18.106 [2024-11-20 13:29:59.716827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.106 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.106 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:18.106 "name": "Existed_Raid", 00:16:18.106 "aliases": [ 00:16:18.106 "13c90058-643d-4a3c-afc1-7b0ecfa792db" 00:16:18.106 ], 00:16:18.106 "product_name": "Raid Volume", 00:16:18.106 "block_size": 4096, 00:16:18.106 "num_blocks": 7936, 00:16:18.106 "uuid": "13c90058-643d-4a3c-afc1-7b0ecfa792db", 00:16:18.106 "md_size": 32, 00:16:18.106 "md_interleave": false, 00:16:18.106 "dif_type": 0, 00:16:18.106 "assigned_rate_limits": { 00:16:18.106 "rw_ios_per_sec": 0, 00:16:18.106 "rw_mbytes_per_sec": 0, 00:16:18.106 "r_mbytes_per_sec": 0, 00:16:18.106 "w_mbytes_per_sec": 0 00:16:18.106 }, 00:16:18.106 "claimed": false, 00:16:18.106 "zoned": false, 00:16:18.106 "supported_io_types": { 00:16:18.106 "read": true, 00:16:18.106 "write": true, 00:16:18.106 "unmap": false, 00:16:18.106 "flush": false, 00:16:18.106 "reset": true, 00:16:18.106 "nvme_admin": false, 00:16:18.106 "nvme_io": false, 00:16:18.106 "nvme_io_md": false, 00:16:18.106 "write_zeroes": true, 00:16:18.106 "zcopy": false, 00:16:18.106 "get_zone_info": false, 00:16:18.106 "zone_management": false, 00:16:18.106 "zone_append": false, 00:16:18.106 "compare": false, 00:16:18.106 "compare_and_write": false, 00:16:18.106 "abort": false, 00:16:18.106 "seek_hole": false, 00:16:18.106 "seek_data": false, 00:16:18.106 "copy": false, 00:16:18.106 "nvme_iov_md": false 00:16:18.106 }, 00:16:18.106 "memory_domains": [ 00:16:18.106 { 00:16:18.106 "dma_device_id": "system", 00:16:18.106 "dma_device_type": 1 00:16:18.106 }, 00:16:18.106 { 00:16:18.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.106 "dma_device_type": 2 00:16:18.106 }, 00:16:18.106 { 
00:16:18.106 "dma_device_id": "system", 00:16:18.106 "dma_device_type": 1 00:16:18.106 }, 00:16:18.106 { 00:16:18.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.106 "dma_device_type": 2 00:16:18.106 } 00:16:18.106 ], 00:16:18.106 "driver_specific": { 00:16:18.106 "raid": { 00:16:18.106 "uuid": "13c90058-643d-4a3c-afc1-7b0ecfa792db", 00:16:18.106 "strip_size_kb": 0, 00:16:18.106 "state": "online", 00:16:18.106 "raid_level": "raid1", 00:16:18.106 "superblock": true, 00:16:18.106 "num_base_bdevs": 2, 00:16:18.106 "num_base_bdevs_discovered": 2, 00:16:18.106 "num_base_bdevs_operational": 2, 00:16:18.106 "base_bdevs_list": [ 00:16:18.106 { 00:16:18.106 "name": "BaseBdev1", 00:16:18.106 "uuid": "258754ac-bbe7-4d25-84f5-d8bdeef117c3", 00:16:18.106 "is_configured": true, 00:16:18.106 "data_offset": 256, 00:16:18.106 "data_size": 7936 00:16:18.106 }, 00:16:18.106 { 00:16:18.106 "name": "BaseBdev2", 00:16:18.106 "uuid": "cac83802-11d6-4a6c-88f8-164702d244ea", 00:16:18.106 "is_configured": true, 00:16:18.106 "data_offset": 256, 00:16:18.106 "data_size": 7936 00:16:18.106 } 00:16:18.106 ] 00:16:18.106 } 00:16:18.106 } 00:16:18.106 }' 00:16:18.106 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:18.366 BaseBdev2' 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.366 [2024-11-20 13:29:59.944262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:18.366 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.367 13:29:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.367 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.367 "name": "Existed_Raid", 00:16:18.367 "uuid": "13c90058-643d-4a3c-afc1-7b0ecfa792db", 00:16:18.367 "strip_size_kb": 0, 00:16:18.367 "state": "online", 00:16:18.367 "raid_level": "raid1", 00:16:18.367 "superblock": true, 00:16:18.367 "num_base_bdevs": 2, 00:16:18.367 "num_base_bdevs_discovered": 1, 00:16:18.367 "num_base_bdevs_operational": 1, 00:16:18.367 "base_bdevs_list": [ 00:16:18.367 { 00:16:18.367 "name": null, 00:16:18.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.367 "is_configured": false, 00:16:18.367 "data_offset": 0, 00:16:18.367 "data_size": 7936 00:16:18.367 }, 00:16:18.367 { 00:16:18.367 "name": "BaseBdev2", 00:16:18.367 "uuid": 
"cac83802-11d6-4a6c-88f8-164702d244ea", 00:16:18.367 "is_configured": true, 00:16:18.367 "data_offset": 256, 00:16:18.367 "data_size": 7936 00:16:18.367 } 00:16:18.367 ] 00:16:18.367 }' 00:16:18.367 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.367 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.935 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:18.935 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:18.935 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.935 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.935 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:18.935 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.936 [2024-11-20 13:30:00.468236] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:18.936 [2024-11-20 13:30:00.468438] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.936 [2024-11-20 13:30:00.481232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.936 [2024-11-20 13:30:00.481374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.936 [2024-11-20 13:30:00.481428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:18.936 13:30:00 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97314 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 97314 ']' 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 97314 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97314 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97314' 00:16:18.936 killing process with pid 97314 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 97314 00:16:18.936 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 97314 00:16:18.936 [2024-11-20 13:30:00.577156] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:18.936 [2024-11-20 13:30:00.578227] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:19.194 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:16:19.194 00:16:19.194 real 0m4.018s 00:16:19.194 user 0m6.374s 00:16:19.194 sys 0m0.838s 00:16:19.194 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:19.194 
************************************ 00:16:19.194 END TEST raid_state_function_test_sb_md_separate 00:16:19.194 ************************************ 00:16:19.194 13:30:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.194 13:30:00 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:16:19.194 13:30:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:19.194 13:30:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:19.194 13:30:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.453 ************************************ 00:16:19.453 START TEST raid_superblock_test_md_separate 00:16:19.453 ************************************ 00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:19.453 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97555 00:16:19.454 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:19.454 13:30:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97555 00:16:19.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.454 13:30:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 97555 ']' 00:16:19.454 13:30:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.454 13:30:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.454 13:30:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:19.454 13:30:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.454 13:30:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.454 [2024-11-20 13:30:00.957110] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:16:19.454 [2024-11-20 13:30:00.957344] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97555 ] 00:16:19.454 [2024-11-20 13:30:01.113508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.713 [2024-11-20 13:30:01.142853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.714 [2024-11-20 13:30:01.185527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.714 [2024-11-20 13:30:01.185647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:20.284 13:30:01 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.284 malloc1 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.284 [2024-11-20 13:30:01.824886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:20.284 [2024-11-20 13:30:01.824952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.284 [2024-11-20 13:30:01.824981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:20.284 [2024-11-20 13:30:01.825014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.284 [2024-11-20 13:30:01.826928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.284 [2024-11-20 13:30:01.826967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:16:20.284 pt1 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.284 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.284 malloc2 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.285 13:30:01 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.285 [2024-11-20 13:30:01.846078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:20.285 [2024-11-20 13:30:01.846132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.285 [2024-11-20 13:30:01.846155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:20.285 [2024-11-20 13:30:01.846170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.285 [2024-11-20 13:30:01.848063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.285 [2024-11-20 13:30:01.848105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:20.285 pt2 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.285 [2024-11-20 13:30:01.854095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:20.285 [2024-11-20 13:30:01.855914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:20.285 [2024-11-20 13:30:01.856094] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:20.285 [2024-11-20 13:30:01.856119] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:20.285 [2024-11-20 13:30:01.856221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:20.285 [2024-11-20 13:30:01.856353] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:20.285 [2024-11-20 13:30:01.856371] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:20.285 [2024-11-20 13:30:01.856478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.285 13:30:01 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.285 "name": "raid_bdev1", 00:16:20.285 "uuid": "bc88dab3-81c2-497e-b9b4-753c6d4ff1e0", 00:16:20.285 "strip_size_kb": 0, 00:16:20.285 "state": "online", 00:16:20.285 "raid_level": "raid1", 00:16:20.285 "superblock": true, 00:16:20.285 "num_base_bdevs": 2, 00:16:20.285 "num_base_bdevs_discovered": 2, 00:16:20.285 "num_base_bdevs_operational": 2, 00:16:20.285 "base_bdevs_list": [ 00:16:20.285 { 00:16:20.285 "name": "pt1", 00:16:20.285 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:20.285 "is_configured": true, 00:16:20.285 "data_offset": 256, 00:16:20.285 "data_size": 7936 00:16:20.285 }, 00:16:20.285 { 00:16:20.285 "name": "pt2", 00:16:20.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.285 "is_configured": true, 00:16:20.285 "data_offset": 256, 00:16:20.285 "data_size": 7936 00:16:20.285 } 00:16:20.285 ] 00:16:20.285 }' 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.285 13:30:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.855 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:20.855 13:30:02 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:20.855 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:20.855 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:20.855 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.856 [2024-11-20 13:30:02.289665] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:20.856 "name": "raid_bdev1", 00:16:20.856 "aliases": [ 00:16:20.856 "bc88dab3-81c2-497e-b9b4-753c6d4ff1e0" 00:16:20.856 ], 00:16:20.856 "product_name": "Raid Volume", 00:16:20.856 "block_size": 4096, 00:16:20.856 "num_blocks": 7936, 00:16:20.856 "uuid": "bc88dab3-81c2-497e-b9b4-753c6d4ff1e0", 00:16:20.856 "md_size": 32, 00:16:20.856 "md_interleave": false, 00:16:20.856 "dif_type": 0, 00:16:20.856 "assigned_rate_limits": { 00:16:20.856 "rw_ios_per_sec": 0, 00:16:20.856 "rw_mbytes_per_sec": 0, 00:16:20.856 "r_mbytes_per_sec": 0, 00:16:20.856 "w_mbytes_per_sec": 0 00:16:20.856 }, 00:16:20.856 "claimed": false, 00:16:20.856 "zoned": false, 
00:16:20.856 "supported_io_types": { 00:16:20.856 "read": true, 00:16:20.856 "write": true, 00:16:20.856 "unmap": false, 00:16:20.856 "flush": false, 00:16:20.856 "reset": true, 00:16:20.856 "nvme_admin": false, 00:16:20.856 "nvme_io": false, 00:16:20.856 "nvme_io_md": false, 00:16:20.856 "write_zeroes": true, 00:16:20.856 "zcopy": false, 00:16:20.856 "get_zone_info": false, 00:16:20.856 "zone_management": false, 00:16:20.856 "zone_append": false, 00:16:20.856 "compare": false, 00:16:20.856 "compare_and_write": false, 00:16:20.856 "abort": false, 00:16:20.856 "seek_hole": false, 00:16:20.856 "seek_data": false, 00:16:20.856 "copy": false, 00:16:20.856 "nvme_iov_md": false 00:16:20.856 }, 00:16:20.856 "memory_domains": [ 00:16:20.856 { 00:16:20.856 "dma_device_id": "system", 00:16:20.856 "dma_device_type": 1 00:16:20.856 }, 00:16:20.856 { 00:16:20.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.856 "dma_device_type": 2 00:16:20.856 }, 00:16:20.856 { 00:16:20.856 "dma_device_id": "system", 00:16:20.856 "dma_device_type": 1 00:16:20.856 }, 00:16:20.856 { 00:16:20.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.856 "dma_device_type": 2 00:16:20.856 } 00:16:20.856 ], 00:16:20.856 "driver_specific": { 00:16:20.856 "raid": { 00:16:20.856 "uuid": "bc88dab3-81c2-497e-b9b4-753c6d4ff1e0", 00:16:20.856 "strip_size_kb": 0, 00:16:20.856 "state": "online", 00:16:20.856 "raid_level": "raid1", 00:16:20.856 "superblock": true, 00:16:20.856 "num_base_bdevs": 2, 00:16:20.856 "num_base_bdevs_discovered": 2, 00:16:20.856 "num_base_bdevs_operational": 2, 00:16:20.856 "base_bdevs_list": [ 00:16:20.856 { 00:16:20.856 "name": "pt1", 00:16:20.856 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:20.856 "is_configured": true, 00:16:20.856 "data_offset": 256, 00:16:20.856 "data_size": 7936 00:16:20.856 }, 00:16:20.856 { 00:16:20.856 "name": "pt2", 00:16:20.856 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.856 "is_configured": true, 00:16:20.856 "data_offset": 256, 
00:16:20.856 "data_size": 7936 00:16:20.856 } 00:16:20.856 ] 00:16:20.856 } 00:16:20.856 } 00:16:20.856 }' 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:20.856 pt2' 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:20.856 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.856 [2024-11-20 13:30:02.513381] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bc88dab3-81c2-497e-b9b4-753c6d4ff1e0 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z bc88dab3-81c2-497e-b9b4-753c6d4ff1e0 ']' 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:21.117 13:30:02 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.117 [2024-11-20 13:30:02.560858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.117 [2024-11-20 13:30:02.560896] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.117 [2024-11-20 13:30:02.561005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.117 [2024-11-20 13:30:02.561075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.117 [2024-11-20 13:30:02.561089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@652 -- # local es=0 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.117 [2024-11-20 13:30:02.692656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:21.117 [2024-11-20 13:30:02.694628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:21.117 [2024-11-20 13:30:02.694702] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:21.117 [2024-11-20 13:30:02.694757] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:21.117 [2024-11-20 13:30:02.694773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.117 [2024-11-20 13:30:02.694791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state 
configuring 00:16:21.117 request: 00:16:21.117 { 00:16:21.117 "name": "raid_bdev1", 00:16:21.117 "raid_level": "raid1", 00:16:21.117 "base_bdevs": [ 00:16:21.117 "malloc1", 00:16:21.117 "malloc2" 00:16:21.117 ], 00:16:21.117 "superblock": false, 00:16:21.117 "method": "bdev_raid_create", 00:16:21.117 "req_id": 1 00:16:21.117 } 00:16:21.117 Got JSON-RPC error response 00:16:21.117 response: 00:16:21.117 { 00:16:21.117 "code": -17, 00:16:21.117 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:21.117 } 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:21.117 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.118 [2024-11-20 13:30:02.756522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:21.118 [2024-11-20 13:30:02.756612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.118 [2024-11-20 13:30:02.756638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:21.118 [2024-11-20 13:30:02.756648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.118 [2024-11-20 13:30:02.758879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.118 [2024-11-20 13:30:02.758919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:21.118 [2024-11-20 13:30:02.758981] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:21.118 [2024-11-20 13:30:02.759047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:21.118 pt1 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.118 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.378 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.378 "name": "raid_bdev1", 00:16:21.378 "uuid": "bc88dab3-81c2-497e-b9b4-753c6d4ff1e0", 00:16:21.378 "strip_size_kb": 0, 00:16:21.378 "state": "configuring", 00:16:21.378 "raid_level": "raid1", 00:16:21.378 "superblock": true, 00:16:21.378 "num_base_bdevs": 2, 00:16:21.378 "num_base_bdevs_discovered": 1, 00:16:21.378 "num_base_bdevs_operational": 2, 00:16:21.378 "base_bdevs_list": [ 00:16:21.378 { 00:16:21.378 "name": "pt1", 00:16:21.378 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:21.378 "is_configured": true, 00:16:21.378 "data_offset": 256, 00:16:21.378 "data_size": 7936 00:16:21.378 }, 00:16:21.378 { 
00:16:21.378 "name": null, 00:16:21.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.378 "is_configured": false, 00:16:21.378 "data_offset": 256, 00:16:21.378 "data_size": 7936 00:16:21.378 } 00:16:21.378 ] 00:16:21.378 }' 00:16:21.378 13:30:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.378 13:30:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.637 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:21.637 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:21.637 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:21.637 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:21.637 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.637 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.637 [2024-11-20 13:30:03.203774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:21.637 [2024-11-20 13:30:03.203856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.637 [2024-11-20 13:30:03.203878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:21.637 [2024-11-20 13:30:03.203888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.637 [2024-11-20 13:30:03.204127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.637 [2024-11-20 13:30:03.204150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:21.637 [2024-11-20 13:30:03.204212] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:21.637 [2024-11-20 13:30:03.204233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:21.637 [2024-11-20 13:30:03.204323] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:21.637 [2024-11-20 13:30:03.204339] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:21.637 [2024-11-20 13:30:03.204422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:21.637 [2024-11-20 13:30:03.204515] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:21.637 [2024-11-20 13:30:03.204534] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:16:21.637 [2024-11-20 13:30:03.204606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.637 pt2 00:16:21.637 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.637 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:21.637 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:21.637 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:21.637 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.637 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.638 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.638 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.638 13:30:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:21.638 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.638 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.638 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.638 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.638 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.638 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.638 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.638 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.638 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.638 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.638 "name": "raid_bdev1", 00:16:21.638 "uuid": "bc88dab3-81c2-497e-b9b4-753c6d4ff1e0", 00:16:21.638 "strip_size_kb": 0, 00:16:21.638 "state": "online", 00:16:21.638 "raid_level": "raid1", 00:16:21.638 "superblock": true, 00:16:21.638 "num_base_bdevs": 2, 00:16:21.638 "num_base_bdevs_discovered": 2, 00:16:21.638 "num_base_bdevs_operational": 2, 00:16:21.638 "base_bdevs_list": [ 00:16:21.638 { 00:16:21.638 "name": "pt1", 00:16:21.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:21.638 "is_configured": true, 00:16:21.638 "data_offset": 256, 00:16:21.638 "data_size": 7936 00:16:21.638 }, 00:16:21.638 { 00:16:21.638 "name": "pt2", 00:16:21.638 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:16:21.638 "is_configured": true, 00:16:21.638 "data_offset": 256, 00:16:21.638 "data_size": 7936 00:16:21.638 } 00:16:21.638 ] 00:16:21.638 }' 00:16:21.638 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.638 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.207 [2024-11-20 13:30:03.675351] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:22.207 "name": "raid_bdev1", 00:16:22.207 
"aliases": [ 00:16:22.207 "bc88dab3-81c2-497e-b9b4-753c6d4ff1e0" 00:16:22.207 ], 00:16:22.207 "product_name": "Raid Volume", 00:16:22.207 "block_size": 4096, 00:16:22.207 "num_blocks": 7936, 00:16:22.207 "uuid": "bc88dab3-81c2-497e-b9b4-753c6d4ff1e0", 00:16:22.207 "md_size": 32, 00:16:22.207 "md_interleave": false, 00:16:22.207 "dif_type": 0, 00:16:22.207 "assigned_rate_limits": { 00:16:22.207 "rw_ios_per_sec": 0, 00:16:22.207 "rw_mbytes_per_sec": 0, 00:16:22.207 "r_mbytes_per_sec": 0, 00:16:22.207 "w_mbytes_per_sec": 0 00:16:22.207 }, 00:16:22.207 "claimed": false, 00:16:22.207 "zoned": false, 00:16:22.207 "supported_io_types": { 00:16:22.207 "read": true, 00:16:22.207 "write": true, 00:16:22.207 "unmap": false, 00:16:22.207 "flush": false, 00:16:22.207 "reset": true, 00:16:22.207 "nvme_admin": false, 00:16:22.207 "nvme_io": false, 00:16:22.207 "nvme_io_md": false, 00:16:22.207 "write_zeroes": true, 00:16:22.207 "zcopy": false, 00:16:22.207 "get_zone_info": false, 00:16:22.207 "zone_management": false, 00:16:22.207 "zone_append": false, 00:16:22.207 "compare": false, 00:16:22.207 "compare_and_write": false, 00:16:22.207 "abort": false, 00:16:22.207 "seek_hole": false, 00:16:22.207 "seek_data": false, 00:16:22.207 "copy": false, 00:16:22.207 "nvme_iov_md": false 00:16:22.207 }, 00:16:22.207 "memory_domains": [ 00:16:22.207 { 00:16:22.207 "dma_device_id": "system", 00:16:22.207 "dma_device_type": 1 00:16:22.207 }, 00:16:22.207 { 00:16:22.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.207 "dma_device_type": 2 00:16:22.207 }, 00:16:22.207 { 00:16:22.207 "dma_device_id": "system", 00:16:22.207 "dma_device_type": 1 00:16:22.207 }, 00:16:22.207 { 00:16:22.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.207 "dma_device_type": 2 00:16:22.207 } 00:16:22.207 ], 00:16:22.207 "driver_specific": { 00:16:22.207 "raid": { 00:16:22.207 "uuid": "bc88dab3-81c2-497e-b9b4-753c6d4ff1e0", 00:16:22.207 "strip_size_kb": 0, 00:16:22.207 "state": "online", 00:16:22.207 
"raid_level": "raid1", 00:16:22.207 "superblock": true, 00:16:22.207 "num_base_bdevs": 2, 00:16:22.207 "num_base_bdevs_discovered": 2, 00:16:22.207 "num_base_bdevs_operational": 2, 00:16:22.207 "base_bdevs_list": [ 00:16:22.207 { 00:16:22.207 "name": "pt1", 00:16:22.207 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:22.207 "is_configured": true, 00:16:22.207 "data_offset": 256, 00:16:22.207 "data_size": 7936 00:16:22.207 }, 00:16:22.207 { 00:16:22.207 "name": "pt2", 00:16:22.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.207 "is_configured": true, 00:16:22.207 "data_offset": 256, 00:16:22.207 "data_size": 7936 00:16:22.207 } 00:16:22.207 ] 00:16:22.207 } 00:16:22.207 } 00:16:22.207 }' 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:22.207 pt2' 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.207 13:30:03 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.207 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:22.467 [2024-11-20 13:30:03.895014] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' bc88dab3-81c2-497e-b9b4-753c6d4ff1e0 '!=' bc88dab3-81c2-497e-b9b4-753c6d4ff1e0 ']' 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.467 [2024-11-20 13:30:03.942656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:22.467 
13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.467 "name": "raid_bdev1", 00:16:22.467 "uuid": "bc88dab3-81c2-497e-b9b4-753c6d4ff1e0", 00:16:22.467 "strip_size_kb": 0, 00:16:22.467 "state": "online", 00:16:22.467 "raid_level": "raid1", 00:16:22.467 "superblock": true, 00:16:22.467 "num_base_bdevs": 2, 00:16:22.467 "num_base_bdevs_discovered": 1, 00:16:22.467 "num_base_bdevs_operational": 1, 00:16:22.467 "base_bdevs_list": [ 00:16:22.467 { 00:16:22.467 "name": null, 00:16:22.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.467 "is_configured": false, 00:16:22.467 "data_offset": 0, 00:16:22.467 "data_size": 7936 00:16:22.467 }, 00:16:22.467 { 00:16:22.467 "name": "pt2", 00:16:22.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.467 "is_configured": true, 00:16:22.467 "data_offset": 256, 00:16:22.467 "data_size": 7936 00:16:22.467 } 
00:16:22.467 ] 00:16:22.467 }' 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.467 13:30:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.726 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:22.726 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.726 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.726 [2024-11-20 13:30:04.373861] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:22.726 [2024-11-20 13:30:04.373903] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.726 [2024-11-20 13:30:04.374005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.726 [2024-11-20 13:30:04.374061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.726 [2024-11-20 13:30:04.374070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:16:22.726 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.726 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.726 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.726 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.726 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.986 13:30:04 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.986 [2024-11-20 13:30:04.449708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:22.986 [2024-11-20 
13:30:04.449786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.986 [2024-11-20 13:30:04.449808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:22.986 [2024-11-20 13:30:04.449818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.986 [2024-11-20 13:30:04.451960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.986 [2024-11-20 13:30:04.452008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:22.986 [2024-11-20 13:30:04.452072] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:22.986 [2024-11-20 13:30:04.452110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:22.986 [2024-11-20 13:30:04.452192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:16:22.986 [2024-11-20 13:30:04.452201] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:22.986 [2024-11-20 13:30:04.452272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:22.986 [2024-11-20 13:30:04.452387] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:16:22.986 [2024-11-20 13:30:04.452402] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:16:22.986 [2024-11-20 13:30:04.452488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.986 pt2 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.986 "name": "raid_bdev1", 00:16:22.986 "uuid": "bc88dab3-81c2-497e-b9b4-753c6d4ff1e0", 00:16:22.986 "strip_size_kb": 0, 00:16:22.986 "state": "online", 00:16:22.986 "raid_level": "raid1", 00:16:22.986 "superblock": true, 00:16:22.986 "num_base_bdevs": 2, 00:16:22.986 
"num_base_bdevs_discovered": 1, 00:16:22.986 "num_base_bdevs_operational": 1, 00:16:22.986 "base_bdevs_list": [ 00:16:22.986 { 00:16:22.986 "name": null, 00:16:22.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.986 "is_configured": false, 00:16:22.986 "data_offset": 256, 00:16:22.986 "data_size": 7936 00:16:22.986 }, 00:16:22.986 { 00:16:22.986 "name": "pt2", 00:16:22.986 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.986 "is_configured": true, 00:16:22.986 "data_offset": 256, 00:16:22.986 "data_size": 7936 00:16:22.986 } 00:16:22.986 ] 00:16:22.986 }' 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.986 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.555 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:23.555 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.555 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.555 [2024-11-20 13:30:04.920926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.555 [2024-11-20 13:30:04.920970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.555 [2024-11-20 13:30:04.921086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.555 [2024-11-20 13:30:04.921145] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.555 [2024-11-20 13:30:04.921171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:16:23.555 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.555 13:30:04 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.555 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.555 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.555 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:23.555 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.555 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:23.555 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:23.555 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:23.555 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:23.555 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.555 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.555 [2024-11-20 13:30:04.976876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:23.555 [2024-11-20 13:30:04.976960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.555 [2024-11-20 13:30:04.976984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:23.555 [2024-11-20 13:30:04.977011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.555 [2024-11-20 13:30:04.979101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.555 [2024-11-20 13:30:04.979140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:16:23.556 [2024-11-20 13:30:04.979203] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:23.556 [2024-11-20 13:30:04.979239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:23.556 [2024-11-20 13:30:04.979381] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:23.556 [2024-11-20 13:30:04.979406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.556 [2024-11-20 13:30:04.979431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:16:23.556 [2024-11-20 13:30:04.979489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:23.556 [2024-11-20 13:30:04.979559] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:16:23.556 [2024-11-20 13:30:04.979576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:23.556 [2024-11-20 13:30:04.979643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:23.556 [2024-11-20 13:30:04.979729] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:16:23.556 [2024-11-20 13:30:04.979741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:16:23.556 [2024-11-20 13:30:04.979827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.556 pt1 00:16:23.556 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.556 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:23.556 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:16:23.556 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.556 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.556 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.556 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.556 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:23.556 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.556 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.556 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.556 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.556 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.556 13:30:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.556 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.556 13:30:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.556 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.556 13:30:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.556 "name": "raid_bdev1", 00:16:23.556 "uuid": "bc88dab3-81c2-497e-b9b4-753c6d4ff1e0", 00:16:23.556 "strip_size_kb": 0, 00:16:23.556 "state": "online", 00:16:23.556 "raid_level": "raid1", 
00:16:23.556 "superblock": true, 00:16:23.556 "num_base_bdevs": 2, 00:16:23.556 "num_base_bdevs_discovered": 1, 00:16:23.556 "num_base_bdevs_operational": 1, 00:16:23.556 "base_bdevs_list": [ 00:16:23.556 { 00:16:23.556 "name": null, 00:16:23.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.556 "is_configured": false, 00:16:23.556 "data_offset": 256, 00:16:23.556 "data_size": 7936 00:16:23.556 }, 00:16:23.556 { 00:16:23.556 "name": "pt2", 00:16:23.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.556 "is_configured": true, 00:16:23.556 "data_offset": 256, 00:16:23.556 "data_size": 7936 00:16:23.556 } 00:16:23.556 ] 00:16:23.556 }' 00:16:23.556 13:30:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.556 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.819 13:30:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:23.819 13:30:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:23.819 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.819 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.819 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.819 13:30:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:23.819 13:30:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:23.819 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.819 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.819 13:30:05 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:23.819 [2024-11-20 13:30:05.480287] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.077 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.077 13:30:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' bc88dab3-81c2-497e-b9b4-753c6d4ff1e0 '!=' bc88dab3-81c2-497e-b9b4-753c6d4ff1e0 ']' 00:16:24.077 13:30:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97555 00:16:24.077 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 97555 ']' 00:16:24.077 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 97555 00:16:24.077 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:24.077 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.077 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97555 00:16:24.077 killing process with pid 97555 00:16:24.077 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.077 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.077 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97555' 00:16:24.077 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 97555 00:16:24.077 [2024-11-20 13:30:05.546822] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.077 [2024-11-20 13:30:05.546926] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:16:24.077 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 97555 00:16:24.077 [2024-11-20 13:30:05.546984] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.077 [2024-11-20 13:30:05.547009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:16:24.077 [2024-11-20 13:30:05.572507] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:24.336 ************************************ 00:16:24.336 END TEST raid_superblock_test_md_separate 00:16:24.336 ************************************ 00:16:24.336 13:30:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:16:24.336 00:16:24.336 real 0m4.922s 00:16:24.336 user 0m8.038s 00:16:24.336 sys 0m1.103s 00:16:24.336 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.336 13:30:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.336 13:30:05 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:16:24.336 13:30:05 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:16:24.336 13:30:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:24.336 13:30:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.336 13:30:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:24.336 ************************************ 00:16:24.336 START TEST raid_rebuild_test_sb_md_separate 00:16:24.336 ************************************ 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=97867 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 97867 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 97867 ']' 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.336 13:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.336 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:24.336 Zero copy mechanism will not be used. 00:16:24.336 [2024-11-20 13:30:05.951896] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:16:24.336 [2024-11-20 13:30:05.952038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97867 ] 00:16:24.594 [2024-11-20 13:30:06.105256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.594 [2024-11-20 13:30:06.133745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.594 [2024-11-20 13:30:06.176898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.594 [2024-11-20 13:30:06.176942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.161 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.161 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:16:25.161 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.161 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:16:25.161 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.161 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.161 BaseBdev1_malloc 
00:16:25.161 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.161 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:25.161 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.161 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.421 [2024-11-20 13:30:06.829055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:25.421 [2024-11-20 13:30:06.829162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.421 [2024-11-20 13:30:06.829206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:25.421 [2024-11-20 13:30:06.829226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.421 [2024-11-20 13:30:06.831528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.421 [2024-11-20 13:30:06.831578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:25.421 BaseBdev1 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.421 BaseBdev2_malloc 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.421 [2024-11-20 13:30:06.858870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:25.421 [2024-11-20 13:30:06.858939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.421 [2024-11-20 13:30:06.858964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:25.421 [2024-11-20 13:30:06.858973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.421 [2024-11-20 13:30:06.861258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.421 [2024-11-20 13:30:06.861292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:25.421 BaseBdev2 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.421 spare_malloc 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.421 spare_delay 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.421 [2024-11-20 13:30:06.911110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:25.421 [2024-11-20 13:30:06.911227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.421 [2024-11-20 13:30:06.911253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:25.421 [2024-11-20 13:30:06.911262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.421 [2024-11-20 13:30:06.913237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.421 [2024-11-20 13:30:06.913273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:25.421 spare 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:25.421 [2024-11-20 13:30:06.923135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.421 [2024-11-20 13:30:06.924902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.421 [2024-11-20 13:30:06.925060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:25.421 [2024-11-20 13:30:06.925074] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:25.421 [2024-11-20 13:30:06.925164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:25.421 [2024-11-20 13:30:06.925269] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:25.421 [2024-11-20 13:30:06.925280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:25.421 [2024-11-20 13:30:06.925367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.421 13:30:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.421 "name": "raid_bdev1", 00:16:25.421 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:25.421 "strip_size_kb": 0, 00:16:25.421 "state": "online", 00:16:25.421 "raid_level": "raid1", 00:16:25.421 "superblock": true, 00:16:25.421 "num_base_bdevs": 2, 00:16:25.421 "num_base_bdevs_discovered": 2, 00:16:25.421 "num_base_bdevs_operational": 2, 00:16:25.421 "base_bdevs_list": [ 00:16:25.421 { 00:16:25.421 "name": "BaseBdev1", 00:16:25.421 "uuid": "8225b54b-ffd7-55cd-b46a-f097e74ce17f", 00:16:25.421 "is_configured": true, 00:16:25.421 "data_offset": 256, 00:16:25.421 "data_size": 7936 00:16:25.421 }, 00:16:25.421 { 00:16:25.421 "name": "BaseBdev2", 00:16:25.421 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:25.421 "is_configured": true, 00:16:25.421 "data_offset": 256, 00:16:25.421 "data_size": 7936 
00:16:25.421 } 00:16:25.421 ] 00:16:25.421 }' 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.421 13:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.991 [2024-11-20 13:30:07.410656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:25.991 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:26.251 [2024-11-20 13:30:07.733822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:26.251 /dev/nbd0 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:26.251 1+0 records in 00:16:26.251 1+0 records out 00:16:26.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411006 s, 10.0 MB/s 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:26.251 13:30:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:26.251 13:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:26.820 7936+0 records in 00:16:26.820 7936+0 records out 00:16:26.820 32505856 bytes (33 MB, 31 MiB) copied, 0.604352 s, 53.8 MB/s 00:16:26.820 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:26.820 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.820 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:26.820 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:26.820 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:26.820 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:26.820 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:27.081 [2024-11-20 13:30:08.676152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:27.081 13:30:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.081 [2024-11-20 13:30:08.705810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.081 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.340 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.340 "name": "raid_bdev1", 00:16:27.340 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:27.340 "strip_size_kb": 0, 00:16:27.340 "state": "online", 00:16:27.340 "raid_level": "raid1", 00:16:27.340 "superblock": true, 00:16:27.340 "num_base_bdevs": 2, 00:16:27.340 "num_base_bdevs_discovered": 1, 00:16:27.340 "num_base_bdevs_operational": 1, 00:16:27.340 "base_bdevs_list": [ 00:16:27.340 { 00:16:27.340 "name": null, 00:16:27.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.340 "is_configured": false, 00:16:27.340 "data_offset": 0, 00:16:27.340 "data_size": 7936 00:16:27.340 }, 00:16:27.340 { 00:16:27.340 "name": "BaseBdev2", 00:16:27.340 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:27.340 "is_configured": true, 00:16:27.340 "data_offset": 256, 00:16:27.340 "data_size": 7936 00:16:27.340 } 00:16:27.340 ] 00:16:27.340 }' 00:16:27.340 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.340 13:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:27.599 13:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:27.599 13:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.599 13:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.599 [2024-11-20 13:30:09.201002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.599 [2024-11-20 13:30:09.203631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:16:27.599 [2024-11-20 13:30:09.205728] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:27.599 13:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.599 13:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:28.979 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.979 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.979 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.979 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.979 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.979 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.979 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.979 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.979 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.979 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.979 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.979 "name": "raid_bdev1", 00:16:28.979 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:28.979 "strip_size_kb": 0, 00:16:28.979 "state": "online", 00:16:28.979 "raid_level": "raid1", 00:16:28.979 "superblock": true, 00:16:28.979 "num_base_bdevs": 2, 00:16:28.979 "num_base_bdevs_discovered": 2, 00:16:28.979 "num_base_bdevs_operational": 2, 00:16:28.979 "process": { 00:16:28.979 "type": "rebuild", 00:16:28.979 "target": "spare", 00:16:28.979 "progress": { 00:16:28.979 "blocks": 2560, 00:16:28.979 "percent": 32 00:16:28.979 } 00:16:28.979 }, 00:16:28.979 "base_bdevs_list": [ 00:16:28.979 { 00:16:28.979 "name": "spare", 00:16:28.979 "uuid": "88af9d8e-867a-5775-b67f-547cd8fa4c72", 00:16:28.979 "is_configured": true, 00:16:28.979 "data_offset": 256, 00:16:28.979 "data_size": 7936 00:16:28.979 }, 00:16:28.979 { 00:16:28.979 "name": "BaseBdev2", 00:16:28.979 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:28.979 "is_configured": true, 00:16:28.979 "data_offset": 256, 00:16:28.979 "data_size": 7936 00:16:28.979 } 00:16:28.979 ] 00:16:28.979 }' 00:16:28.979 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.979 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.979 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.979 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.979 13:30:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.980 [2024-11-20 13:30:10.368633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.980 [2024-11-20 13:30:10.411643] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:28.980 [2024-11-20 13:30:10.411739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.980 [2024-11-20 13:30:10.411760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.980 [2024-11-20 13:30:10.411768] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.980 13:30:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.980 "name": "raid_bdev1", 00:16:28.980 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:28.980 "strip_size_kb": 0, 00:16:28.980 "state": "online", 00:16:28.980 "raid_level": "raid1", 00:16:28.980 "superblock": true, 00:16:28.980 "num_base_bdevs": 2, 00:16:28.980 "num_base_bdevs_discovered": 1, 00:16:28.980 "num_base_bdevs_operational": 1, 00:16:28.980 "base_bdevs_list": [ 00:16:28.980 { 00:16:28.980 "name": null, 00:16:28.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.980 "is_configured": false, 00:16:28.980 "data_offset": 0, 00:16:28.980 "data_size": 7936 00:16:28.980 }, 00:16:28.980 { 00:16:28.980 "name": "BaseBdev2", 00:16:28.980 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:28.980 "is_configured": true, 00:16:28.980 "data_offset": 256, 00:16:28.980 "data_size": 7936 00:16:28.980 } 00:16:28.980 ] 00:16:28.980 }' 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.980 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.240 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.240 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.240 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.240 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.240 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.240 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.240 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.240 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.240 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.240 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.240 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.240 "name": "raid_bdev1", 00:16:29.240 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:29.240 "strip_size_kb": 0, 00:16:29.240 "state": "online", 00:16:29.240 "raid_level": "raid1", 00:16:29.240 "superblock": true, 00:16:29.240 "num_base_bdevs": 2, 00:16:29.240 "num_base_bdevs_discovered": 1, 00:16:29.240 "num_base_bdevs_operational": 1, 00:16:29.240 "base_bdevs_list": [ 00:16:29.240 { 00:16:29.240 "name": null, 00:16:29.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.241 
"is_configured": false, 00:16:29.241 "data_offset": 0, 00:16:29.241 "data_size": 7936 00:16:29.241 }, 00:16:29.241 { 00:16:29.241 "name": "BaseBdev2", 00:16:29.241 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:29.241 "is_configured": true, 00:16:29.241 "data_offset": 256, 00:16:29.241 "data_size": 7936 00:16:29.241 } 00:16:29.241 ] 00:16:29.241 }' 00:16:29.241 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.500 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.500 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.500 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.500 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:29.500 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.500 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.500 [2024-11-20 13:30:10.974443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:29.500 [2024-11-20 13:30:10.977246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:16:29.500 [2024-11-20 13:30:10.979507] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:29.500 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.500 13:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:30.438 13:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.438 13:30:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.438 13:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.438 13:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.438 13:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.438 13:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.438 13:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.438 13:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.438 13:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.438 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.438 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.438 "name": "raid_bdev1", 00:16:30.439 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:30.439 "strip_size_kb": 0, 00:16:30.439 "state": "online", 00:16:30.439 "raid_level": "raid1", 00:16:30.439 "superblock": true, 00:16:30.439 "num_base_bdevs": 2, 00:16:30.439 "num_base_bdevs_discovered": 2, 00:16:30.439 "num_base_bdevs_operational": 2, 00:16:30.439 "process": { 00:16:30.439 "type": "rebuild", 00:16:30.439 "target": "spare", 00:16:30.439 "progress": { 00:16:30.439 "blocks": 2560, 00:16:30.439 "percent": 32 00:16:30.439 } 00:16:30.439 }, 00:16:30.439 "base_bdevs_list": [ 00:16:30.439 { 00:16:30.439 "name": "spare", 00:16:30.439 "uuid": "88af9d8e-867a-5775-b67f-547cd8fa4c72", 00:16:30.439 "is_configured": true, 00:16:30.439 "data_offset": 256, 00:16:30.439 "data_size": 7936 00:16:30.439 }, 
00:16:30.439 { 00:16:30.439 "name": "BaseBdev2", 00:16:30.439 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:30.439 "is_configured": true, 00:16:30.439 "data_offset": 256, 00:16:30.439 "data_size": 7936 00:16:30.439 } 00:16:30.439 ] 00:16:30.439 }' 00:16:30.439 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.439 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.439 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:30.698 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=601 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.698 13:30:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.698 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.698 "name": "raid_bdev1", 00:16:30.698 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:30.698 "strip_size_kb": 0, 00:16:30.698 "state": "online", 00:16:30.698 "raid_level": "raid1", 00:16:30.698 "superblock": true, 00:16:30.698 "num_base_bdevs": 2, 00:16:30.698 "num_base_bdevs_discovered": 2, 00:16:30.698 "num_base_bdevs_operational": 2, 00:16:30.698 "process": { 00:16:30.698 "type": "rebuild", 00:16:30.698 "target": "spare", 00:16:30.698 "progress": { 00:16:30.698 "blocks": 2816, 00:16:30.698 "percent": 35 00:16:30.698 } 00:16:30.698 }, 00:16:30.698 "base_bdevs_list": [ 00:16:30.698 { 00:16:30.698 "name": "spare", 00:16:30.698 "uuid": "88af9d8e-867a-5775-b67f-547cd8fa4c72", 00:16:30.698 "is_configured": true, 00:16:30.698 "data_offset": 256, 00:16:30.699 "data_size": 7936 00:16:30.699 }, 00:16:30.699 { 00:16:30.699 "name": "BaseBdev2", 00:16:30.699 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:30.699 
"is_configured": true, 00:16:30.699 "data_offset": 256, 00:16:30.699 "data_size": 7936 00:16:30.699 } 00:16:30.699 ] 00:16:30.699 }' 00:16:30.699 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.699 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.699 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.699 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.699 13:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:31.637 13:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.637 13:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.637 13:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.637 13:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.637 13:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.637 13:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.637 13:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.637 13:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.637 13:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.637 13:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.637 13:30:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.897 13:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.897 "name": "raid_bdev1", 00:16:31.897 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:31.897 "strip_size_kb": 0, 00:16:31.897 "state": "online", 00:16:31.897 "raid_level": "raid1", 00:16:31.897 "superblock": true, 00:16:31.897 "num_base_bdevs": 2, 00:16:31.897 "num_base_bdevs_discovered": 2, 00:16:31.897 "num_base_bdevs_operational": 2, 00:16:31.897 "process": { 00:16:31.897 "type": "rebuild", 00:16:31.897 "target": "spare", 00:16:31.897 "progress": { 00:16:31.897 "blocks": 5632, 00:16:31.897 "percent": 70 00:16:31.897 } 00:16:31.897 }, 00:16:31.897 "base_bdevs_list": [ 00:16:31.897 { 00:16:31.897 "name": "spare", 00:16:31.897 "uuid": "88af9d8e-867a-5775-b67f-547cd8fa4c72", 00:16:31.897 "is_configured": true, 00:16:31.897 "data_offset": 256, 00:16:31.897 "data_size": 7936 00:16:31.897 }, 00:16:31.897 { 00:16:31.897 "name": "BaseBdev2", 00:16:31.897 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:31.897 "is_configured": true, 00:16:31.897 "data_offset": 256, 00:16:31.897 "data_size": 7936 00:16:31.897 } 00:16:31.897 ] 00:16:31.897 }' 00:16:31.897 13:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.897 13:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.897 13:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.897 13:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.897 13:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.465 [2024-11-20 13:30:14.093594] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:16:32.465 [2024-11-20 13:30:14.093711] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:32.465 [2024-11-20 13:30:14.093857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.038 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.038 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.038 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.038 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.038 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.038 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.038 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.038 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.038 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.038 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.038 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.038 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.038 "name": "raid_bdev1", 00:16:33.038 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:33.038 "strip_size_kb": 0, 00:16:33.038 "state": "online", 00:16:33.038 "raid_level": "raid1", 00:16:33.038 "superblock": true, 00:16:33.038 
"num_base_bdevs": 2, 00:16:33.038 "num_base_bdevs_discovered": 2, 00:16:33.038 "num_base_bdevs_operational": 2, 00:16:33.038 "base_bdevs_list": [ 00:16:33.038 { 00:16:33.038 "name": "spare", 00:16:33.038 "uuid": "88af9d8e-867a-5775-b67f-547cd8fa4c72", 00:16:33.038 "is_configured": true, 00:16:33.038 "data_offset": 256, 00:16:33.038 "data_size": 7936 00:16:33.038 }, 00:16:33.038 { 00:16:33.038 "name": "BaseBdev2", 00:16:33.038 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:33.038 "is_configured": true, 00:16:33.038 "data_offset": 256, 00:16:33.038 "data_size": 7936 00:16:33.039 } 00:16:33.039 ] 00:16:33.039 }' 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.039 13:30:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.039 "name": "raid_bdev1", 00:16:33.039 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:33.039 "strip_size_kb": 0, 00:16:33.039 "state": "online", 00:16:33.039 "raid_level": "raid1", 00:16:33.039 "superblock": true, 00:16:33.039 "num_base_bdevs": 2, 00:16:33.039 "num_base_bdevs_discovered": 2, 00:16:33.039 "num_base_bdevs_operational": 2, 00:16:33.039 "base_bdevs_list": [ 00:16:33.039 { 00:16:33.039 "name": "spare", 00:16:33.039 "uuid": "88af9d8e-867a-5775-b67f-547cd8fa4c72", 00:16:33.039 "is_configured": true, 00:16:33.039 "data_offset": 256, 00:16:33.039 "data_size": 7936 00:16:33.039 }, 00:16:33.039 { 00:16:33.039 "name": "BaseBdev2", 00:16:33.039 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:33.039 "is_configured": true, 00:16:33.039 "data_offset": 256, 00:16:33.039 "data_size": 7936 00:16:33.039 } 00:16:33.039 ] 00:16:33.039 }' 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.039 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.308 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.308 "name": "raid_bdev1", 00:16:33.308 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:33.308 
"strip_size_kb": 0, 00:16:33.308 "state": "online", 00:16:33.308 "raid_level": "raid1", 00:16:33.308 "superblock": true, 00:16:33.308 "num_base_bdevs": 2, 00:16:33.308 "num_base_bdevs_discovered": 2, 00:16:33.308 "num_base_bdevs_operational": 2, 00:16:33.308 "base_bdevs_list": [ 00:16:33.308 { 00:16:33.308 "name": "spare", 00:16:33.308 "uuid": "88af9d8e-867a-5775-b67f-547cd8fa4c72", 00:16:33.308 "is_configured": true, 00:16:33.308 "data_offset": 256, 00:16:33.308 "data_size": 7936 00:16:33.308 }, 00:16:33.308 { 00:16:33.308 "name": "BaseBdev2", 00:16:33.308 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:33.308 "is_configured": true, 00:16:33.308 "data_offset": 256, 00:16:33.308 "data_size": 7936 00:16:33.308 } 00:16:33.308 ] 00:16:33.308 }' 00:16:33.308 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.308 13:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.567 [2024-11-20 13:30:15.155611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.567 [2024-11-20 13:30:15.155650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.567 [2024-11-20 13:30:15.155759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.567 [2024-11-20 13:30:15.155832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.567 [2024-11-20 13:30:15.155852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, 
state offline 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:33.567 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:33.826 /dev/nbd0 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:33.826 1+0 records in 00:16:33.826 1+0 records out 00:16:33.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309688 s, 13.2 MB/s 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:33.826 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:34.086 /dev/nbd1 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:34.086 1+0 records in 00:16:34.086 1+0 records out 00:16:34.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290062 s, 14.1 MB/s 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:34.086 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:34.345 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:34.345 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:34.345 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:34.345 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:16:34.345 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:34.345 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:34.345 13:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:34.604 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:34.604 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:34.604 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:34.604 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:34.604 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:34.604 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:34.604 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:34.604 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:34.604 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:34.604 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:34.863 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:34.863 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:34.863 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:16:34.863 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:34.863 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:34.863 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:34.863 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:34.863 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:34.863 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.864 [2024-11-20 13:30:16.307560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:34.864 [2024-11-20 13:30:16.307628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.864 [2024-11-20 13:30:16.307649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:34.864 [2024-11-20 13:30:16.307663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:34.864 [2024-11-20 13:30:16.309738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.864 [2024-11-20 13:30:16.309814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:34.864 [2024-11-20 13:30:16.309911] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:34.864 [2024-11-20 13:30:16.309998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:34.864 [2024-11-20 13:30:16.310147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.864 spare 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.864 [2024-11-20 13:30:16.410099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:16:34.864 [2024-11-20 13:30:16.410232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:34.864 [2024-11-20 13:30:16.410436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:16:34.864 [2024-11-20 13:30:16.410622] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:16:34.864 [2024-11-20 13:30:16.410643] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:16:34.864 [2024-11-20 13:30:16.410775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.864 "name": "raid_bdev1", 00:16:34.864 "uuid": 
"cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:34.864 "strip_size_kb": 0, 00:16:34.864 "state": "online", 00:16:34.864 "raid_level": "raid1", 00:16:34.864 "superblock": true, 00:16:34.864 "num_base_bdevs": 2, 00:16:34.864 "num_base_bdevs_discovered": 2, 00:16:34.864 "num_base_bdevs_operational": 2, 00:16:34.864 "base_bdevs_list": [ 00:16:34.864 { 00:16:34.864 "name": "spare", 00:16:34.864 "uuid": "88af9d8e-867a-5775-b67f-547cd8fa4c72", 00:16:34.864 "is_configured": true, 00:16:34.864 "data_offset": 256, 00:16:34.864 "data_size": 7936 00:16:34.864 }, 00:16:34.864 { 00:16:34.864 "name": "BaseBdev2", 00:16:34.864 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:34.864 "is_configured": true, 00:16:34.864 "data_offset": 256, 00:16:34.864 "data_size": 7936 00:16:34.864 } 00:16:34.864 ] 00:16:34.864 }' 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.864 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.432 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:35.432 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.432 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:35.432 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.432 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.432 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.432 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.432 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.432 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.432 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.432 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.432 "name": "raid_bdev1", 00:16:35.432 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:35.432 "strip_size_kb": 0, 00:16:35.432 "state": "online", 00:16:35.432 "raid_level": "raid1", 00:16:35.432 "superblock": true, 00:16:35.432 "num_base_bdevs": 2, 00:16:35.432 "num_base_bdevs_discovered": 2, 00:16:35.433 "num_base_bdevs_operational": 2, 00:16:35.433 "base_bdevs_list": [ 00:16:35.433 { 00:16:35.433 "name": "spare", 00:16:35.433 "uuid": "88af9d8e-867a-5775-b67f-547cd8fa4c72", 00:16:35.433 "is_configured": true, 00:16:35.433 "data_offset": 256, 00:16:35.433 "data_size": 7936 00:16:35.433 }, 00:16:35.433 { 00:16:35.433 "name": "BaseBdev2", 00:16:35.433 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:35.433 "is_configured": true, 00:16:35.433 "data_offset": 256, 00:16:35.433 "data_size": 7936 00:16:35.433 } 00:16:35.433 ] 00:16:35.433 }' 00:16:35.433 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.433 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.433 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.433 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.433 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.433 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:16:35.433 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.433 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.433 13:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.433 [2024-11-20 13:30:17.030452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.433 13:30:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.433 "name": "raid_bdev1", 00:16:35.433 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:35.433 "strip_size_kb": 0, 00:16:35.433 "state": "online", 00:16:35.433 "raid_level": "raid1", 00:16:35.433 "superblock": true, 00:16:35.433 "num_base_bdevs": 2, 00:16:35.433 "num_base_bdevs_discovered": 1, 00:16:35.433 "num_base_bdevs_operational": 1, 00:16:35.433 "base_bdevs_list": [ 00:16:35.433 { 00:16:35.433 "name": null, 00:16:35.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.433 "is_configured": false, 00:16:35.433 "data_offset": 0, 00:16:35.433 "data_size": 7936 00:16:35.433 }, 00:16:35.433 { 00:16:35.433 "name": "BaseBdev2", 00:16:35.433 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:35.433 "is_configured": true, 00:16:35.433 "data_offset": 256, 00:16:35.433 "data_size": 7936 00:16:35.433 } 00:16:35.433 ] 00:16:35.433 }' 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.433 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.017 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:36.017 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.017 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.017 [2024-11-20 13:30:17.505715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:36.017 [2024-11-20 13:30:17.505929] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:36.017 [2024-11-20 13:30:17.505944] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:36.017 [2024-11-20 13:30:17.506030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:36.017 [2024-11-20 13:30:17.508602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:16:36.017 [2024-11-20 13:30:17.510663] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:36.017 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.017 13:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:36.957 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.957 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.957 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.957 13:30:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.957 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.957 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.957 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.957 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.957 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.957 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.957 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.957 "name": "raid_bdev1", 00:16:36.957 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:36.957 "strip_size_kb": 0, 00:16:36.957 "state": "online", 00:16:36.957 "raid_level": "raid1", 00:16:36.957 "superblock": true, 00:16:36.957 "num_base_bdevs": 2, 00:16:36.957 "num_base_bdevs_discovered": 2, 00:16:36.957 "num_base_bdevs_operational": 2, 00:16:36.957 "process": { 00:16:36.957 "type": "rebuild", 00:16:36.957 "target": "spare", 00:16:36.957 "progress": { 00:16:36.957 "blocks": 2560, 00:16:36.957 "percent": 32 00:16:36.957 } 00:16:36.957 }, 00:16:36.957 "base_bdevs_list": [ 00:16:36.957 { 00:16:36.957 "name": "spare", 00:16:36.957 "uuid": "88af9d8e-867a-5775-b67f-547cd8fa4c72", 00:16:36.957 "is_configured": true, 00:16:36.957 "data_offset": 256, 00:16:36.957 "data_size": 7936 00:16:36.957 }, 00:16:36.957 { 00:16:36.957 "name": "BaseBdev2", 00:16:36.957 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:36.957 "is_configured": true, 00:16:36.957 "data_offset": 256, 00:16:36.957 "data_size": 7936 00:16:36.957 } 00:16:36.957 ] 00:16:36.957 
}' 00:16:36.958 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.958 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.958 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.218 [2024-11-20 13:30:18.665406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.218 [2024-11-20 13:30:18.716163] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:37.218 [2024-11-20 13:30:18.716315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.218 [2024-11-20 13:30:18.716377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.218 [2024-11-20 13:30:18.716402] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.218 "name": "raid_bdev1", 00:16:37.218 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:37.218 "strip_size_kb": 0, 00:16:37.218 "state": "online", 00:16:37.218 "raid_level": "raid1", 00:16:37.218 "superblock": true, 00:16:37.218 "num_base_bdevs": 2, 00:16:37.218 "num_base_bdevs_discovered": 1, 00:16:37.218 "num_base_bdevs_operational": 1, 00:16:37.218 "base_bdevs_list": [ 00:16:37.218 { 00:16:37.218 "name": 
null, 00:16:37.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.218 "is_configured": false, 00:16:37.218 "data_offset": 0, 00:16:37.218 "data_size": 7936 00:16:37.218 }, 00:16:37.218 { 00:16:37.218 "name": "BaseBdev2", 00:16:37.218 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:37.218 "is_configured": true, 00:16:37.218 "data_offset": 256, 00:16:37.218 "data_size": 7936 00:16:37.218 } 00:16:37.218 ] 00:16:37.218 }' 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.218 13:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.787 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:37.787 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.787 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.787 [2024-11-20 13:30:19.207124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:37.787 [2024-11-20 13:30:19.207205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.787 [2024-11-20 13:30:19.207233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:37.787 [2024-11-20 13:30:19.207243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.787 [2024-11-20 13:30:19.207522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.787 [2024-11-20 13:30:19.207541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:37.787 [2024-11-20 13:30:19.207616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:37.787 [2024-11-20 13:30:19.207630] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:37.787 [2024-11-20 13:30:19.207649] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:37.787 [2024-11-20 13:30:19.207670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.787 [2024-11-20 13:30:19.210242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:16:37.787 [2024-11-20 13:30:19.212398] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:37.787 spare 00:16:37.787 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.787 13:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.728 13:30:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.728 "name": "raid_bdev1", 00:16:38.728 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:38.728 "strip_size_kb": 0, 00:16:38.728 "state": "online", 00:16:38.728 "raid_level": "raid1", 00:16:38.728 "superblock": true, 00:16:38.728 "num_base_bdevs": 2, 00:16:38.728 "num_base_bdevs_discovered": 2, 00:16:38.728 "num_base_bdevs_operational": 2, 00:16:38.728 "process": { 00:16:38.728 "type": "rebuild", 00:16:38.728 "target": "spare", 00:16:38.728 "progress": { 00:16:38.728 "blocks": 2560, 00:16:38.728 "percent": 32 00:16:38.728 } 00:16:38.728 }, 00:16:38.728 "base_bdevs_list": [ 00:16:38.728 { 00:16:38.728 "name": "spare", 00:16:38.728 "uuid": "88af9d8e-867a-5775-b67f-547cd8fa4c72", 00:16:38.728 "is_configured": true, 00:16:38.728 "data_offset": 256, 00:16:38.728 "data_size": 7936 00:16:38.728 }, 00:16:38.728 { 00:16:38.728 "name": "BaseBdev2", 00:16:38.728 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:38.728 "is_configured": true, 00:16:38.728 "data_offset": 256, 00:16:38.728 "data_size": 7936 00:16:38.728 } 00:16:38.728 ] 00:16:38.728 }' 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.728 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.728 [2024-11-20 13:30:20.347658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.988 [2024-11-20 13:30:20.417661] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:38.988 [2024-11-20 13:30:20.417745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.988 [2024-11-20 13:30:20.417760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.988 [2024-11-20 13:30:20.417770] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:38.988 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.988 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:38.988 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.988 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.988 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.988 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.988 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:38.988 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.988 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.988 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:38.988 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.988 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.989 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.989 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.989 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.989 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.989 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.989 "name": "raid_bdev1", 00:16:38.989 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:38.989 "strip_size_kb": 0, 00:16:38.989 "state": "online", 00:16:38.989 "raid_level": "raid1", 00:16:38.989 "superblock": true, 00:16:38.989 "num_base_bdevs": 2, 00:16:38.989 "num_base_bdevs_discovered": 1, 00:16:38.989 "num_base_bdevs_operational": 1, 00:16:38.989 "base_bdevs_list": [ 00:16:38.989 { 00:16:38.989 "name": null, 00:16:38.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.989 "is_configured": false, 00:16:38.989 "data_offset": 0, 00:16:38.989 "data_size": 7936 00:16:38.989 }, 00:16:38.989 { 00:16:38.989 "name": "BaseBdev2", 00:16:38.989 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:38.989 "is_configured": true, 00:16:38.989 "data_offset": 256, 00:16:38.989 "data_size": 7936 00:16:38.989 } 00:16:38.989 ] 00:16:38.989 }' 00:16:38.989 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.989 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.249 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.249 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.249 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.249 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.249 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.249 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.249 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.249 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.249 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.249 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.249 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.249 "name": "raid_bdev1", 00:16:39.249 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:39.249 "strip_size_kb": 0, 00:16:39.249 "state": "online", 00:16:39.249 "raid_level": "raid1", 00:16:39.249 "superblock": true, 00:16:39.249 "num_base_bdevs": 2, 00:16:39.249 "num_base_bdevs_discovered": 1, 00:16:39.249 "num_base_bdevs_operational": 1, 00:16:39.249 "base_bdevs_list": [ 00:16:39.249 { 00:16:39.249 "name": null, 00:16:39.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.250 "is_configured": false, 00:16:39.250 "data_offset": 0, 00:16:39.250 "data_size": 7936 00:16:39.250 }, 00:16:39.250 { 00:16:39.250 "name": "BaseBdev2", 00:16:39.250 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 
00:16:39.250 "is_configured": true, 00:16:39.250 "data_offset": 256, 00:16:39.250 "data_size": 7936 00:16:39.250 } 00:16:39.250 ] 00:16:39.250 }' 00:16:39.250 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.509 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.509 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.510 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.510 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:39.510 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.510 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.510 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.510 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:39.510 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.510 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.510 [2024-11-20 13:30:20.984273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:39.510 [2024-11-20 13:30:20.984413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.510 [2024-11-20 13:30:20.984441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:39.510 [2024-11-20 13:30:20.984454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:39.510 [2024-11-20 13:30:20.984686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.510 [2024-11-20 13:30:20.984702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:39.510 [2024-11-20 13:30:20.984763] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:39.510 [2024-11-20 13:30:20.984786] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:39.510 [2024-11-20 13:30:20.984805] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:39.510 [2024-11-20 13:30:20.984820] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:39.510 BaseBdev1 00:16:39.510 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.510 13:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:40.448 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:40.448 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.448 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.448 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.448 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.448 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:40.448 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.448 13:30:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.448 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.448 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.448 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.448 13:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.448 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.448 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.448 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.448 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.448 "name": "raid_bdev1", 00:16:40.448 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:40.448 "strip_size_kb": 0, 00:16:40.448 "state": "online", 00:16:40.448 "raid_level": "raid1", 00:16:40.448 "superblock": true, 00:16:40.448 "num_base_bdevs": 2, 00:16:40.448 "num_base_bdevs_discovered": 1, 00:16:40.448 "num_base_bdevs_operational": 1, 00:16:40.448 "base_bdevs_list": [ 00:16:40.448 { 00:16:40.448 "name": null, 00:16:40.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.448 "is_configured": false, 00:16:40.448 "data_offset": 0, 00:16:40.448 "data_size": 7936 00:16:40.448 }, 00:16:40.448 { 00:16:40.448 "name": "BaseBdev2", 00:16:40.448 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:40.449 "is_configured": true, 00:16:40.449 "data_offset": 256, 00:16:40.449 "data_size": 7936 00:16:40.449 } 00:16:40.449 ] 00:16:40.449 }' 00:16:40.449 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.449 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.103 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:41.103 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.103 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:41.103 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:41.103 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.103 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.103 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.103 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.103 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.103 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.103 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.103 "name": "raid_bdev1", 00:16:41.103 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:41.103 "strip_size_kb": 0, 00:16:41.103 "state": "online", 00:16:41.103 "raid_level": "raid1", 00:16:41.103 "superblock": true, 00:16:41.103 "num_base_bdevs": 2, 00:16:41.103 "num_base_bdevs_discovered": 1, 00:16:41.103 "num_base_bdevs_operational": 1, 00:16:41.103 "base_bdevs_list": [ 00:16:41.103 { 00:16:41.103 "name": null, 00:16:41.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.103 
"is_configured": false, 00:16:41.103 "data_offset": 0, 00:16:41.103 "data_size": 7936 00:16:41.103 }, 00:16:41.103 { 00:16:41.103 "name": "BaseBdev2", 00:16:41.103 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:41.103 "is_configured": true, 00:16:41.103 "data_offset": 256, 00:16:41.103 "data_size": 7936 00:16:41.103 } 00:16:41.103 ] 00:16:41.103 }' 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:41.104 13:30:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.104 [2024-11-20 13:30:22.609681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.104 [2024-11-20 13:30:22.609955] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:41.104 [2024-11-20 13:30:22.610038] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:41.104 request: 00:16:41.104 { 00:16:41.104 "base_bdev": "BaseBdev1", 00:16:41.104 "raid_bdev": "raid_bdev1", 00:16:41.104 "method": "bdev_raid_add_base_bdev", 00:16:41.104 "req_id": 1 00:16:41.104 } 00:16:41.104 Got JSON-RPC error response 00:16:41.104 response: 00:16:41.104 { 00:16:41.104 "code": -22, 00:16:41.104 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:41.104 } 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:41.104 13:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.042 "name": "raid_bdev1", 00:16:42.042 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:42.042 "strip_size_kb": 0, 00:16:42.042 "state": "online", 00:16:42.042 "raid_level": "raid1", 00:16:42.042 "superblock": true, 00:16:42.042 "num_base_bdevs": 2, 00:16:42.042 
"num_base_bdevs_discovered": 1, 00:16:42.042 "num_base_bdevs_operational": 1, 00:16:42.042 "base_bdevs_list": [ 00:16:42.042 { 00:16:42.042 "name": null, 00:16:42.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.042 "is_configured": false, 00:16:42.042 "data_offset": 0, 00:16:42.042 "data_size": 7936 00:16:42.042 }, 00:16:42.042 { 00:16:42.042 "name": "BaseBdev2", 00:16:42.042 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:42.042 "is_configured": true, 00:16:42.042 "data_offset": 256, 00:16:42.042 "data_size": 7936 00:16:42.042 } 00:16:42.042 ] 00:16:42.042 }' 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.042 13:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.611 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.611 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.611 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.611 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.611 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.611 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.611 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.611 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.611 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.611 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.611 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.611 "name": "raid_bdev1", 00:16:42.611 "uuid": "cc2b2e72-68ff-4fff-b12c-52c2952ffd50", 00:16:42.611 "strip_size_kb": 0, 00:16:42.611 "state": "online", 00:16:42.611 "raid_level": "raid1", 00:16:42.611 "superblock": true, 00:16:42.611 "num_base_bdevs": 2, 00:16:42.611 "num_base_bdevs_discovered": 1, 00:16:42.611 "num_base_bdevs_operational": 1, 00:16:42.611 "base_bdevs_list": [ 00:16:42.611 { 00:16:42.611 "name": null, 00:16:42.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.611 "is_configured": false, 00:16:42.611 "data_offset": 0, 00:16:42.611 "data_size": 7936 00:16:42.611 }, 00:16:42.611 { 00:16:42.611 "name": "BaseBdev2", 00:16:42.611 "uuid": "761eb431-7e4a-53f5-afeb-5ded3971d301", 00:16:42.611 "is_configured": true, 00:16:42.611 "data_offset": 256, 00:16:42.611 "data_size": 7936 00:16:42.611 } 00:16:42.611 ] 00:16:42.611 }' 00:16:42.611 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.611 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.611 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.611 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.611 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 97867 00:16:42.612 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 97867 ']' 00:16:42.612 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 97867 00:16:42.612 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:42.612 13:30:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:42.612 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97867 00:16:42.612 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:42.612 killing process with pid 97867 00:16:42.612 Received shutdown signal, test time was about 60.000000 seconds 00:16:42.612 00:16:42.612 Latency(us) 00:16:42.612 [2024-11-20T13:30:24.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.612 [2024-11-20T13:30:24.280Z] =================================================================================================================== 00:16:42.612 [2024-11-20T13:30:24.280Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:42.612 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:42.612 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97867' 00:16:42.612 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 97867 00:16:42.612 [2024-11-20 13:30:24.263742] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:42.612 [2024-11-20 13:30:24.263927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.612 [2024-11-20 13:30:24.264004] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.612 [2024-11-20 13:30:24.264016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:16:42.612 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 97867 00:16:42.871 [2024-11-20 13:30:24.299690] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:16:42.871 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:16:42.871 00:16:42.871 real 0m18.639s 00:16:42.871 user 0m24.925s 00:16:42.871 sys 0m2.637s 00:16:42.871 ************************************ 00:16:42.871 END TEST raid_rebuild_test_sb_md_separate 00:16:42.871 ************************************ 00:16:42.871 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:42.871 13:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.131 13:30:24 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:16:43.131 13:30:24 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:16:43.131 13:30:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:43.131 13:30:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.131 13:30:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.131 ************************************ 00:16:43.131 START TEST raid_state_function_test_sb_md_interleaved 00:16:43.131 ************************************ 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:43.131 13:30:24 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:43.131 Process raid pid: 98552 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98552 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98552' 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98552 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 98552 ']' 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.131 13:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.131 [2024-11-20 13:30:24.652633] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:16:43.131 [2024-11-20 13:30:24.652897] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.390 [2024-11-20 13:30:24.808792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.390 [2024-11-20 13:30:24.837837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.390 [2024-11-20 13:30:24.880738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.390 [2024-11-20 13:30:24.880856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.992 [2024-11-20 13:30:25.558200] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:43.992 [2024-11-20 13:30:25.558313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:43.992 [2024-11-20 13:30:25.558329] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:43.992 [2024-11-20 13:30:25.558340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:43.992 13:30:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.992 13:30:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.992 "name": "Existed_Raid", 00:16:43.992 "uuid": "5409ad49-c44f-4d23-b6c3-65003e6ecfb1", 00:16:43.992 "strip_size_kb": 0, 00:16:43.992 "state": "configuring", 00:16:43.992 "raid_level": "raid1", 00:16:43.992 "superblock": true, 00:16:43.992 "num_base_bdevs": 2, 00:16:43.992 "num_base_bdevs_discovered": 0, 00:16:43.992 "num_base_bdevs_operational": 2, 00:16:43.992 "base_bdevs_list": [ 00:16:43.992 { 00:16:43.992 "name": "BaseBdev1", 00:16:43.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.992 "is_configured": false, 00:16:43.992 "data_offset": 0, 00:16:43.992 "data_size": 0 00:16:43.992 }, 00:16:43.992 { 00:16:43.992 "name": "BaseBdev2", 00:16:43.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.992 "is_configured": false, 00:16:43.992 "data_offset": 0, 00:16:43.992 "data_size": 0 00:16:43.992 } 00:16:43.992 ] 00:16:43.992 }' 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.992 13:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.561 [2024-11-20 13:30:26.033321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:44.561 [2024-11-20 13:30:26.033425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state 
configuring 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.561 [2024-11-20 13:30:26.041295] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:44.561 [2024-11-20 13:30:26.041377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:44.561 [2024-11-20 13:30:26.041422] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:44.561 [2024-11-20 13:30:26.041462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.561 [2024-11-20 13:30:26.058391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:44.561 BaseBdev1 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.561 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.561 [ 00:16:44.561 { 00:16:44.561 "name": "BaseBdev1", 00:16:44.561 "aliases": [ 00:16:44.561 "9bba770a-9d8f-4358-98a2-87d37496d67d" 00:16:44.561 ], 00:16:44.561 "product_name": "Malloc disk", 00:16:44.561 "block_size": 4128, 00:16:44.561 "num_blocks": 8192, 00:16:44.561 "uuid": "9bba770a-9d8f-4358-98a2-87d37496d67d", 00:16:44.561 "md_size": 32, 00:16:44.561 
"md_interleave": true, 00:16:44.561 "dif_type": 0, 00:16:44.561 "assigned_rate_limits": { 00:16:44.561 "rw_ios_per_sec": 0, 00:16:44.561 "rw_mbytes_per_sec": 0, 00:16:44.561 "r_mbytes_per_sec": 0, 00:16:44.561 "w_mbytes_per_sec": 0 00:16:44.561 }, 00:16:44.561 "claimed": true, 00:16:44.561 "claim_type": "exclusive_write", 00:16:44.561 "zoned": false, 00:16:44.561 "supported_io_types": { 00:16:44.561 "read": true, 00:16:44.561 "write": true, 00:16:44.561 "unmap": true, 00:16:44.561 "flush": true, 00:16:44.561 "reset": true, 00:16:44.561 "nvme_admin": false, 00:16:44.561 "nvme_io": false, 00:16:44.561 "nvme_io_md": false, 00:16:44.561 "write_zeroes": true, 00:16:44.561 "zcopy": true, 00:16:44.561 "get_zone_info": false, 00:16:44.561 "zone_management": false, 00:16:44.561 "zone_append": false, 00:16:44.561 "compare": false, 00:16:44.561 "compare_and_write": false, 00:16:44.561 "abort": true, 00:16:44.561 "seek_hole": false, 00:16:44.561 "seek_data": false, 00:16:44.561 "copy": true, 00:16:44.561 "nvme_iov_md": false 00:16:44.561 }, 00:16:44.561 "memory_domains": [ 00:16:44.562 { 00:16:44.562 "dma_device_id": "system", 00:16:44.562 "dma_device_type": 1 00:16:44.562 }, 00:16:44.562 { 00:16:44.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.562 "dma_device_type": 2 00:16:44.562 } 00:16:44.562 ], 00:16:44.562 "driver_specific": {} 00:16:44.562 } 00:16:44.562 ] 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.562 13:30:26 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.562 "name": "Existed_Raid", 00:16:44.562 "uuid": "35380db2-72f0-4559-908e-7967e628f123", 00:16:44.562 "strip_size_kb": 0, 00:16:44.562 "state": "configuring", 00:16:44.562 "raid_level": "raid1", 
00:16:44.562 "superblock": true, 00:16:44.562 "num_base_bdevs": 2, 00:16:44.562 "num_base_bdevs_discovered": 1, 00:16:44.562 "num_base_bdevs_operational": 2, 00:16:44.562 "base_bdevs_list": [ 00:16:44.562 { 00:16:44.562 "name": "BaseBdev1", 00:16:44.562 "uuid": "9bba770a-9d8f-4358-98a2-87d37496d67d", 00:16:44.562 "is_configured": true, 00:16:44.562 "data_offset": 256, 00:16:44.562 "data_size": 7936 00:16:44.562 }, 00:16:44.562 { 00:16:44.562 "name": "BaseBdev2", 00:16:44.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.562 "is_configured": false, 00:16:44.562 "data_offset": 0, 00:16:44.562 "data_size": 0 00:16:44.562 } 00:16:44.562 ] 00:16:44.562 }' 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.562 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.131 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:45.131 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.131 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.131 [2024-11-20 13:30:26.541685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:45.131 [2024-11-20 13:30:26.541844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:16:45.131 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.131 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:45.131 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:45.131 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.131 [2024-11-20 13:30:26.549695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.131 [2024-11-20 13:30:26.551774] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:45.131 [2024-11-20 13:30:26.551863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:45.131 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.131 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:45.131 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:45.131 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:45.131 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.131 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.131 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.131 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.132 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:45.132 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.132 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.132 
13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.132 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.132 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.132 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.132 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.132 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.132 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.132 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.132 "name": "Existed_Raid", 00:16:45.132 "uuid": "edc2832a-7555-49f5-8743-fc88c2157b6d", 00:16:45.132 "strip_size_kb": 0, 00:16:45.132 "state": "configuring", 00:16:45.132 "raid_level": "raid1", 00:16:45.132 "superblock": true, 00:16:45.132 "num_base_bdevs": 2, 00:16:45.132 "num_base_bdevs_discovered": 1, 00:16:45.132 "num_base_bdevs_operational": 2, 00:16:45.132 "base_bdevs_list": [ 00:16:45.132 { 00:16:45.132 "name": "BaseBdev1", 00:16:45.132 "uuid": "9bba770a-9d8f-4358-98a2-87d37496d67d", 00:16:45.132 "is_configured": true, 00:16:45.132 "data_offset": 256, 00:16:45.132 "data_size": 7936 00:16:45.132 }, 00:16:45.132 { 00:16:45.132 "name": "BaseBdev2", 00:16:45.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.132 "is_configured": false, 00:16:45.132 "data_offset": 0, 00:16:45.132 "data_size": 0 00:16:45.132 } 00:16:45.132 ] 00:16:45.132 }' 00:16:45.132 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:16:45.132 13:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.391 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:16:45.391 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.391 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.651 [2024-11-20 13:30:27.060072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.651 [2024-11-20 13:30:27.060383] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:45.651 [2024-11-20 13:30:27.060408] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:45.651 [2024-11-20 13:30:27.060518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:45.651 [2024-11-20 13:30:27.060601] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:45.651 [2024-11-20 13:30:27.060617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:16:45.651 [2024-11-20 13:30:27.060707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.651 BaseBdev2 00:16:45.651 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.651 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:45.651 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:45.651 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:16:45.651 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:16:45.651 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:45.651 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:45.651 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:45.651 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.651 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.651 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.651 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:45.651 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.651 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.651 [ 00:16:45.651 { 00:16:45.651 "name": "BaseBdev2", 00:16:45.651 "aliases": [ 00:16:45.651 "e653639b-1fb6-47b1-9d86-63ec727205f5" 00:16:45.651 ], 00:16:45.651 "product_name": "Malloc disk", 00:16:45.651 "block_size": 4128, 00:16:45.651 "num_blocks": 8192, 00:16:45.651 "uuid": "e653639b-1fb6-47b1-9d86-63ec727205f5", 00:16:45.651 "md_size": 32, 00:16:45.652 "md_interleave": true, 00:16:45.652 "dif_type": 0, 00:16:45.652 "assigned_rate_limits": { 00:16:45.652 "rw_ios_per_sec": 0, 00:16:45.652 "rw_mbytes_per_sec": 0, 00:16:45.652 "r_mbytes_per_sec": 0, 00:16:45.652 "w_mbytes_per_sec": 0 00:16:45.652 }, 00:16:45.652 "claimed": true, 00:16:45.652 "claim_type": "exclusive_write", 
00:16:45.652 "zoned": false, 00:16:45.652 "supported_io_types": { 00:16:45.652 "read": true, 00:16:45.652 "write": true, 00:16:45.652 "unmap": true, 00:16:45.652 "flush": true, 00:16:45.652 "reset": true, 00:16:45.652 "nvme_admin": false, 00:16:45.652 "nvme_io": false, 00:16:45.652 "nvme_io_md": false, 00:16:45.652 "write_zeroes": true, 00:16:45.652 "zcopy": true, 00:16:45.652 "get_zone_info": false, 00:16:45.652 "zone_management": false, 00:16:45.652 "zone_append": false, 00:16:45.652 "compare": false, 00:16:45.652 "compare_and_write": false, 00:16:45.652 "abort": true, 00:16:45.652 "seek_hole": false, 00:16:45.652 "seek_data": false, 00:16:45.652 "copy": true, 00:16:45.652 "nvme_iov_md": false 00:16:45.652 }, 00:16:45.652 "memory_domains": [ 00:16:45.652 { 00:16:45.652 "dma_device_id": "system", 00:16:45.652 "dma_device_type": 1 00:16:45.652 }, 00:16:45.652 { 00:16:45.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.652 "dma_device_type": 2 00:16:45.652 } 00:16:45.652 ], 00:16:45.652 "driver_specific": {} 00:16:45.652 } 00:16:45.652 ] 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.652 
13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.652 "name": "Existed_Raid", 00:16:45.652 "uuid": "edc2832a-7555-49f5-8743-fc88c2157b6d", 00:16:45.652 "strip_size_kb": 0, 00:16:45.652 "state": "online", 00:16:45.652 "raid_level": "raid1", 00:16:45.652 "superblock": true, 00:16:45.652 "num_base_bdevs": 2, 00:16:45.652 "num_base_bdevs_discovered": 2, 00:16:45.652 
"num_base_bdevs_operational": 2, 00:16:45.652 "base_bdevs_list": [ 00:16:45.652 { 00:16:45.652 "name": "BaseBdev1", 00:16:45.652 "uuid": "9bba770a-9d8f-4358-98a2-87d37496d67d", 00:16:45.652 "is_configured": true, 00:16:45.652 "data_offset": 256, 00:16:45.652 "data_size": 7936 00:16:45.652 }, 00:16:45.652 { 00:16:45.652 "name": "BaseBdev2", 00:16:45.652 "uuid": "e653639b-1fb6-47b1-9d86-63ec727205f5", 00:16:45.652 "is_configured": true, 00:16:45.652 "data_offset": 256, 00:16:45.652 "data_size": 7936 00:16:45.652 } 00:16:45.652 ] 00:16:45.652 }' 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.652 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.231 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:46.231 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:46.231 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:46.231 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:46.231 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:46.231 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:46.231 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:46.231 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:46.231 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.231 13:30:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.231 [2024-11-20 13:30:27.603703] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.231 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.231 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:46.231 "name": "Existed_Raid", 00:16:46.231 "aliases": [ 00:16:46.231 "edc2832a-7555-49f5-8743-fc88c2157b6d" 00:16:46.231 ], 00:16:46.231 "product_name": "Raid Volume", 00:16:46.231 "block_size": 4128, 00:16:46.231 "num_blocks": 7936, 00:16:46.231 "uuid": "edc2832a-7555-49f5-8743-fc88c2157b6d", 00:16:46.231 "md_size": 32, 00:16:46.231 "md_interleave": true, 00:16:46.231 "dif_type": 0, 00:16:46.231 "assigned_rate_limits": { 00:16:46.231 "rw_ios_per_sec": 0, 00:16:46.231 "rw_mbytes_per_sec": 0, 00:16:46.231 "r_mbytes_per_sec": 0, 00:16:46.231 "w_mbytes_per_sec": 0 00:16:46.231 }, 00:16:46.231 "claimed": false, 00:16:46.231 "zoned": false, 00:16:46.231 "supported_io_types": { 00:16:46.231 "read": true, 00:16:46.231 "write": true, 00:16:46.231 "unmap": false, 00:16:46.231 "flush": false, 00:16:46.231 "reset": true, 00:16:46.231 "nvme_admin": false, 00:16:46.231 "nvme_io": false, 00:16:46.231 "nvme_io_md": false, 00:16:46.231 "write_zeroes": true, 00:16:46.231 "zcopy": false, 00:16:46.231 "get_zone_info": false, 00:16:46.231 "zone_management": false, 00:16:46.231 "zone_append": false, 00:16:46.231 "compare": false, 00:16:46.231 "compare_and_write": false, 00:16:46.231 "abort": false, 00:16:46.231 "seek_hole": false, 00:16:46.231 "seek_data": false, 00:16:46.231 "copy": false, 00:16:46.231 "nvme_iov_md": false 00:16:46.231 }, 00:16:46.231 "memory_domains": [ 00:16:46.231 { 00:16:46.231 "dma_device_id": "system", 00:16:46.231 "dma_device_type": 1 00:16:46.231 }, 00:16:46.231 { 00:16:46.231 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:46.231 "dma_device_type": 2 00:16:46.231 }, 00:16:46.231 { 00:16:46.231 "dma_device_id": "system", 00:16:46.231 "dma_device_type": 1 00:16:46.231 }, 00:16:46.231 { 00:16:46.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.232 "dma_device_type": 2 00:16:46.232 } 00:16:46.232 ], 00:16:46.232 "driver_specific": { 00:16:46.232 "raid": { 00:16:46.232 "uuid": "edc2832a-7555-49f5-8743-fc88c2157b6d", 00:16:46.232 "strip_size_kb": 0, 00:16:46.232 "state": "online", 00:16:46.232 "raid_level": "raid1", 00:16:46.232 "superblock": true, 00:16:46.232 "num_base_bdevs": 2, 00:16:46.232 "num_base_bdevs_discovered": 2, 00:16:46.232 "num_base_bdevs_operational": 2, 00:16:46.232 "base_bdevs_list": [ 00:16:46.232 { 00:16:46.232 "name": "BaseBdev1", 00:16:46.232 "uuid": "9bba770a-9d8f-4358-98a2-87d37496d67d", 00:16:46.232 "is_configured": true, 00:16:46.232 "data_offset": 256, 00:16:46.232 "data_size": 7936 00:16:46.232 }, 00:16:46.232 { 00:16:46.232 "name": "BaseBdev2", 00:16:46.232 "uuid": "e653639b-1fb6-47b1-9d86-63ec727205f5", 00:16:46.232 "is_configured": true, 00:16:46.232 "data_offset": 256, 00:16:46.232 "data_size": 7936 00:16:46.232 } 00:16:46.232 ] 00:16:46.232 } 00:16:46.232 } 00:16:46.232 }' 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:46.232 BaseBdev2' 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:46.232 
13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.232 [2024-11-20 13:30:27.819075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.232 13:30:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.232 "name": "Existed_Raid", 00:16:46.232 "uuid": "edc2832a-7555-49f5-8743-fc88c2157b6d", 00:16:46.232 "strip_size_kb": 0, 00:16:46.232 "state": "online", 00:16:46.232 "raid_level": "raid1", 00:16:46.232 "superblock": true, 00:16:46.232 "num_base_bdevs": 2, 00:16:46.232 "num_base_bdevs_discovered": 1, 00:16:46.232 "num_base_bdevs_operational": 1, 00:16:46.232 "base_bdevs_list": [ 00:16:46.232 { 00:16:46.232 "name": null, 00:16:46.232 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:46.232 "is_configured": false, 00:16:46.232 "data_offset": 0, 00:16:46.232 "data_size": 7936 00:16:46.232 }, 00:16:46.232 { 00:16:46.232 "name": "BaseBdev2", 00:16:46.232 "uuid": "e653639b-1fb6-47b1-9d86-63ec727205f5", 00:16:46.232 "is_configured": true, 00:16:46.232 "data_offset": 256, 00:16:46.232 "data_size": 7936 00:16:46.232 } 00:16:46.232 ] 00:16:46.232 }' 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.232 13:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:46.815 13:30:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.815 [2024-11-20 13:30:28.342206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:46.815 [2024-11-20 13:30:28.342385] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.815 [2024-11-20 13:30:28.354605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.815 [2024-11-20 13:30:28.354731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.815 [2024-11-20 13:30:28.354783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.815 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:46.816 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.816 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:46.816 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:46.816 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:46.816 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98552 00:16:46.816 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 98552 ']' 00:16:46.816 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 98552 00:16:46.816 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:16:46.816 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.816 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98552 00:16:46.816 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.816 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.816 killing process with pid 98552 00:16:46.816 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98552' 00:16:46.816 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 98552 00:16:46.816 [2024-11-20 13:30:28.453379] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.816 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 98552 00:16:46.816 [2024-11-20 13:30:28.454491] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:47.075 
13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:16:47.075 00:16:47.075 real 0m4.114s 00:16:47.075 user 0m6.544s 00:16:47.075 sys 0m0.858s 00:16:47.075 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.075 ************************************ 00:16:47.075 END TEST raid_state_function_test_sb_md_interleaved 00:16:47.076 ************************************ 00:16:47.076 13:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.076 13:30:28 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:16:47.076 13:30:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:47.076 13:30:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.076 13:30:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.335 ************************************ 00:16:47.335 START TEST raid_superblock_test_md_interleaved 00:16:47.335 ************************************ 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=98790 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 98790 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 98790 ']' 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.335 13:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.335 [2024-11-20 13:30:28.832761] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:16:47.335 [2024-11-20 13:30:28.832903] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98790 ] 00:16:47.335 [2024-11-20 13:30:28.973307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.594 [2024-11-20 13:30:29.004863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.594 [2024-11-20 13:30:29.054825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.595 [2024-11-20 13:30:29.054972] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.168 malloc1 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.168 [2024-11-20 13:30:29.770972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:48.168 [2024-11-20 13:30:29.771134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.168 [2024-11-20 13:30:29.771196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:48.168 [2024-11-20 13:30:29.771235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.168 
[2024-11-20 13:30:29.773522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.168 [2024-11-20 13:30:29.773600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:48.168 pt1 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.168 malloc2 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.168 [2024-11-20 13:30:29.800434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:48.168 [2024-11-20 13:30:29.800567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.168 [2024-11-20 13:30:29.800592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:48.168 [2024-11-20 13:30:29.800604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.168 [2024-11-20 13:30:29.802720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.168 [2024-11-20 13:30:29.802763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:48.168 pt2 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.168 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.168 [2024-11-20 13:30:29.812495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:48.168 [2024-11-20 13:30:29.814680] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.168 [2024-11-20 13:30:29.814864] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:48.168 [2024-11-20 13:30:29.814888] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:48.168 [2024-11-20 13:30:29.815038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:48.168 [2024-11-20 13:30:29.815118] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:48.169 [2024-11-20 13:30:29.815130] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:48.169 [2024-11-20 13:30:29.815230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.169 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.169 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:48.169 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.169 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.169 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.169 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.169 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.169 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.169 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.169 
13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.169 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.169 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.169 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.169 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.169 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.429 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.429 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.429 "name": "raid_bdev1", 00:16:48.429 "uuid": "8feea0a5-fb95-4958-8314-cbc7fc6bc8d1", 00:16:48.429 "strip_size_kb": 0, 00:16:48.429 "state": "online", 00:16:48.429 "raid_level": "raid1", 00:16:48.429 "superblock": true, 00:16:48.429 "num_base_bdevs": 2, 00:16:48.429 "num_base_bdevs_discovered": 2, 00:16:48.429 "num_base_bdevs_operational": 2, 00:16:48.429 "base_bdevs_list": [ 00:16:48.429 { 00:16:48.429 "name": "pt1", 00:16:48.429 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:48.429 "is_configured": true, 00:16:48.429 "data_offset": 256, 00:16:48.429 "data_size": 7936 00:16:48.429 }, 00:16:48.429 { 00:16:48.429 "name": "pt2", 00:16:48.429 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.429 "is_configured": true, 00:16:48.429 "data_offset": 256, 00:16:48.429 "data_size": 7936 00:16:48.429 } 00:16:48.429 ] 00:16:48.429 }' 00:16:48.429 13:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.429 13:30:29 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.691 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:48.691 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:48.691 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:48.691 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:48.691 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:48.691 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:48.691 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:48.691 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:48.691 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.691 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.691 [2024-11-20 13:30:30.288034] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.691 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.691 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:48.691 "name": "raid_bdev1", 00:16:48.691 "aliases": [ 00:16:48.691 "8feea0a5-fb95-4958-8314-cbc7fc6bc8d1" 00:16:48.691 ], 00:16:48.691 "product_name": "Raid Volume", 00:16:48.691 "block_size": 4128, 00:16:48.691 "num_blocks": 7936, 00:16:48.691 "uuid": "8feea0a5-fb95-4958-8314-cbc7fc6bc8d1", 00:16:48.691 "md_size": 32, 
00:16:48.691 "md_interleave": true, 00:16:48.691 "dif_type": 0, 00:16:48.691 "assigned_rate_limits": { 00:16:48.691 "rw_ios_per_sec": 0, 00:16:48.691 "rw_mbytes_per_sec": 0, 00:16:48.691 "r_mbytes_per_sec": 0, 00:16:48.691 "w_mbytes_per_sec": 0 00:16:48.691 }, 00:16:48.691 "claimed": false, 00:16:48.691 "zoned": false, 00:16:48.691 "supported_io_types": { 00:16:48.691 "read": true, 00:16:48.691 "write": true, 00:16:48.691 "unmap": false, 00:16:48.691 "flush": false, 00:16:48.691 "reset": true, 00:16:48.691 "nvme_admin": false, 00:16:48.691 "nvme_io": false, 00:16:48.691 "nvme_io_md": false, 00:16:48.691 "write_zeroes": true, 00:16:48.691 "zcopy": false, 00:16:48.691 "get_zone_info": false, 00:16:48.691 "zone_management": false, 00:16:48.691 "zone_append": false, 00:16:48.691 "compare": false, 00:16:48.691 "compare_and_write": false, 00:16:48.691 "abort": false, 00:16:48.691 "seek_hole": false, 00:16:48.691 "seek_data": false, 00:16:48.691 "copy": false, 00:16:48.691 "nvme_iov_md": false 00:16:48.691 }, 00:16:48.691 "memory_domains": [ 00:16:48.691 { 00:16:48.691 "dma_device_id": "system", 00:16:48.691 "dma_device_type": 1 00:16:48.691 }, 00:16:48.691 { 00:16:48.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.691 "dma_device_type": 2 00:16:48.691 }, 00:16:48.691 { 00:16:48.691 "dma_device_id": "system", 00:16:48.691 "dma_device_type": 1 00:16:48.691 }, 00:16:48.691 { 00:16:48.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.691 "dma_device_type": 2 00:16:48.691 } 00:16:48.691 ], 00:16:48.691 "driver_specific": { 00:16:48.691 "raid": { 00:16:48.691 "uuid": "8feea0a5-fb95-4958-8314-cbc7fc6bc8d1", 00:16:48.691 "strip_size_kb": 0, 00:16:48.691 "state": "online", 00:16:48.691 "raid_level": "raid1", 00:16:48.691 "superblock": true, 00:16:48.691 "num_base_bdevs": 2, 00:16:48.691 "num_base_bdevs_discovered": 2, 00:16:48.691 "num_base_bdevs_operational": 2, 00:16:48.691 "base_bdevs_list": [ 00:16:48.691 { 00:16:48.691 "name": "pt1", 00:16:48.691 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:48.691 "is_configured": true, 00:16:48.691 "data_offset": 256, 00:16:48.691 "data_size": 7936 00:16:48.691 }, 00:16:48.691 { 00:16:48.691 "name": "pt2", 00:16:48.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.691 "is_configured": true, 00:16:48.691 "data_offset": 256, 00:16:48.691 "data_size": 7936 00:16:48.691 } 00:16:48.691 ] 00:16:48.691 } 00:16:48.691 } 00:16:48.691 }' 00:16:48.691 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:48.952 pt2' 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:48.952 13:30:30 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:48.952 [2024-11-20 13:30:30.527590] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8feea0a5-fb95-4958-8314-cbc7fc6bc8d1 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 8feea0a5-fb95-4958-8314-cbc7fc6bc8d1 ']' 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.952 [2024-11-20 13:30:30.583181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.952 [2024-11-20 13:30:30.583225] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.952 [2024-11-20 13:30:30.583378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.952 [2024-11-20 13:30:30.583495] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.952 [2024-11-20 13:30:30.583510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.952 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.212 13:30:30 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:16:49.212 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:16:49.212 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:49.212 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:16:49.212 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.212 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:49.212 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.212 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:49.212 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:16:49.212 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.212 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:49.212 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:49.213 [2024-11-20 13:30:30.726951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:16:49.213 [2024-11-20 13:30:30.729373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:16:49.213 [2024-11-20 13:30:30.729543] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:16:49.213 [2024-11-20 13:30:30.729613] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:16:49.213 [2024-11-20 13:30:30.729633] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:49.213 [2024-11-20 13:30:30.729645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring
00:16:49.213 request:
00:16:49.213 {
00:16:49.213 "name": "raid_bdev1",
00:16:49.213 "raid_level": "raid1",
00:16:49.213 "base_bdevs": [
00:16:49.213 "malloc1",
00:16:49.213 "malloc2"
00:16:49.213 ],
00:16:49.213 "superblock": false,
00:16:49.213 "method": "bdev_raid_create",
00:16:49.213 "req_id": 1
00:16:49.213 }
00:16:49.213 Got JSON-RPC error response
00:16:49.213 response:
00:16:49.213 {
00:16:49.213 "code": -17,
00:16:49.213 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:16:49.213 }
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:49.213 [2024-11-20 13:30:30.786797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:49.213 [2024-11-20 13:30:30.786985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:49.213 [2024-11-20 13:30:30.787063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:16:49.213 [2024-11-20 13:30:30.787102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:49.213 [2024-11-20 13:30:30.789501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:49.213 [2024-11-20 13:30:30.789593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:49.213 [2024-11-20 13:30:30.789687] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:16:49.213 [2024-11-20 13:30:30.789763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
pt1
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:49.213 "name": "raid_bdev1",
00:16:49.213 "uuid": "8feea0a5-fb95-4958-8314-cbc7fc6bc8d1",
00:16:49.213 "strip_size_kb": 0,
00:16:49.213 "state": "configuring",
00:16:49.213 "raid_level": "raid1",
00:16:49.213 "superblock": true,
00:16:49.213 "num_base_bdevs": 2,
00:16:49.213 "num_base_bdevs_discovered": 1,
00:16:49.213 "num_base_bdevs_operational": 2,
00:16:49.213 "base_bdevs_list": [
00:16:49.213 {
00:16:49.213 "name": "pt1",
00:16:49.213 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:49.213 "is_configured": true,
00:16:49.213 "data_offset": 256,
00:16:49.213 "data_size": 7936
00:16:49.213 },
00:16:49.213 {
00:16:49.213 "name": null,
00:16:49.213 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:49.213 "is_configured": false,
00:16:49.213 "data_offset": 256,
00:16:49.213 "data_size": 7936
00:16:49.213 }
00:16:49.213 ]
00:16:49.213 }'
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:49.213 13:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:49.783 [2024-11-20 13:30:31.258021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:49.783 [2024-11-20 13:30:31.258107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:49.783 [2024-11-20 13:30:31.258157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:16:49.783 [2024-11-20 13:30:31.258171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:49.783 [2024-11-20 13:30:31.258386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:49.783 [2024-11-20 13:30:31.258403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:49.783 [2024-11-20 13:30:31.258466] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:49.783 [2024-11-20 13:30:31.258498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:49.783 [2024-11-20 13:30:31.258593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:16:49.783 [2024-11-20 13:30:31.258602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:16:49.783 [2024-11-20 13:30:31.258706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:16:49.783 [2024-11-20 13:30:31.258774] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:16:49.783 [2024-11-20 13:30:31.258789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900
00:16:49.783 [2024-11-20 13:30:31.258858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
pt2
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:49.783 "name": "raid_bdev1",
00:16:49.783 "uuid": "8feea0a5-fb95-4958-8314-cbc7fc6bc8d1",
00:16:49.783 "strip_size_kb": 0,
00:16:49.783 "state": "online",
00:16:49.783 "raid_level": "raid1",
00:16:49.783 "superblock": true,
00:16:49.783 "num_base_bdevs": 2,
00:16:49.783 "num_base_bdevs_discovered": 2,
00:16:49.783 "num_base_bdevs_operational": 2,
00:16:49.783 "base_bdevs_list": [
00:16:49.783 {
00:16:49.783 "name": "pt1",
00:16:49.783 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:49.783 "is_configured": true,
00:16:49.783 "data_offset": 256,
00:16:49.783 "data_size": 7936
00:16:49.783 },
00:16:49.783 {
00:16:49.783 "name": "pt2",
00:16:49.783 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:49.783 "is_configured": true,
00:16:49.783 "data_offset": 256,
00:16:49.783 "data_size": 7936
00:16:49.783 }
00:16:49.783 ]
00:16:49.783 }'
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:49.783 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:50.352 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:16:50.352 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:16:50.352 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:50.352 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:50.352 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:16:50.352 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:50.352 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:50.352 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:50.352 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.352 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:50.352 [2024-11-20 13:30:31.753484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:50.352 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.352 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:50.352 "name": "raid_bdev1",
00:16:50.352 "aliases": [
00:16:50.352 "8feea0a5-fb95-4958-8314-cbc7fc6bc8d1"
00:16:50.352 ],
00:16:50.352 "product_name": "Raid Volume",
00:16:50.352 "block_size": 4128,
00:16:50.352 "num_blocks": 7936,
00:16:50.352 "uuid": "8feea0a5-fb95-4958-8314-cbc7fc6bc8d1",
00:16:50.352 "md_size": 32,
00:16:50.352 "md_interleave": true,
00:16:50.352 "dif_type": 0,
00:16:50.352 "assigned_rate_limits": {
00:16:50.352 "rw_ios_per_sec": 0,
00:16:50.352 "rw_mbytes_per_sec": 0,
00:16:50.352 "r_mbytes_per_sec": 0,
00:16:50.352 "w_mbytes_per_sec": 0
00:16:50.352 },
00:16:50.353 "claimed": false,
00:16:50.353 "zoned": false,
00:16:50.353 "supported_io_types": {
00:16:50.353 "read": true,
00:16:50.353 "write": true,
00:16:50.353 "unmap": false,
00:16:50.353 "flush": false,
00:16:50.353 "reset": true,
00:16:50.353 "nvme_admin": false,
00:16:50.353 "nvme_io": false,
00:16:50.353 "nvme_io_md": false,
00:16:50.353 "write_zeroes": true,
00:16:50.353 "zcopy": false,
00:16:50.353 "get_zone_info": false,
00:16:50.353 "zone_management": false,
00:16:50.353 "zone_append": false,
00:16:50.353 "compare": false,
00:16:50.353 "compare_and_write": false,
00:16:50.353 "abort": false,
00:16:50.353 "seek_hole": false,
00:16:50.353 "seek_data": false,
00:16:50.353 "copy": false,
00:16:50.353 "nvme_iov_md": false
00:16:50.353 },
00:16:50.353 "memory_domains": [
00:16:50.353 {
00:16:50.353 "dma_device_id": "system",
00:16:50.353 "dma_device_type": 1
00:16:50.353 },
00:16:50.353 {
00:16:50.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:50.353 "dma_device_type": 2
00:16:50.353 },
00:16:50.353 {
00:16:50.353 "dma_device_id": "system",
00:16:50.353 "dma_device_type": 1
00:16:50.353 },
00:16:50.353 {
00:16:50.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:50.353 "dma_device_type": 2
00:16:50.353 }
00:16:50.353 ],
00:16:50.353 "driver_specific": {
00:16:50.353 "raid": {
00:16:50.353 "uuid": "8feea0a5-fb95-4958-8314-cbc7fc6bc8d1",
00:16:50.353 "strip_size_kb": 0,
00:16:50.353 "state": "online",
00:16:50.353 "raid_level": "raid1",
00:16:50.353 "superblock": true,
00:16:50.353 "num_base_bdevs": 2,
00:16:50.353 "num_base_bdevs_discovered": 2,
00:16:50.353 "num_base_bdevs_operational": 2,
00:16:50.353 "base_bdevs_list": [
00:16:50.353 {
00:16:50.353 "name": "pt1",
00:16:50.353 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:50.353 "is_configured": true,
00:16:50.353 "data_offset": 256,
00:16:50.353 "data_size": 7936
00:16:50.353 },
00:16:50.353 {
00:16:50.353 "name": "pt2",
00:16:50.353 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:50.353 "is_configured": true,
00:16:50.353 "data_offset": 256,
00:16:50.353 "data_size": 7936
00:16:50.353 }
00:16:50.353 ]
00:16:50.353 }
00:16:50.353 }
00:16:50.353 }'
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:16:50.353 pt2'
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:50.353 13:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
[2024-11-20 13:30:31.989125] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:50.353 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 8feea0a5-fb95-4958-8314-cbc7fc6bc8d1 '!=' 8feea0a5-fb95-4958-8314-cbc7fc6bc8d1 ']'
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:50.612 [2024-11-20 13:30:32.032827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:50.612 "name": "raid_bdev1",
00:16:50.612 "uuid": "8feea0a5-fb95-4958-8314-cbc7fc6bc8d1",
00:16:50.612 "strip_size_kb": 0,
00:16:50.612 "state": "online",
00:16:50.612 "raid_level": "raid1",
00:16:50.612 "superblock": true,
00:16:50.612 "num_base_bdevs": 2,
00:16:50.612 "num_base_bdevs_discovered": 1,
00:16:50.612 "num_base_bdevs_operational": 1,
00:16:50.612 "base_bdevs_list": [
00:16:50.612 {
00:16:50.612 "name": null,
00:16:50.612 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:50.612 "is_configured": false,
00:16:50.612 "data_offset": 0,
00:16:50.612 "data_size": 7936
00:16:50.612 },
00:16:50.612 {
00:16:50.612 "name": "pt2",
00:16:50.612 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:50.612 "is_configured": true,
00:16:50.612 "data_offset": 256,
00:16:50.612 "data_size": 7936
00:16:50.612 }
00:16:50.612 ]
00:16:50.612 }'
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:50.612 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:50.872 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:50.872 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.872 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:50.872 [2024-11-20 13:30:32.487940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:50.872 [2024-11-20 13:30:32.487977] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:50.872 [2024-11-20 13:30:32.488090] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:50.872 [2024-11-20 13:30:32.488151] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:50.872 [2024-11-20 13:30:32.488162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline
00:16:50.872 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.872 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:16:50.872 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:50.872 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.872 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:50.872 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.131 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:16:51.131 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:16:51.131 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:16:51.131 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:51.131 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:16:51.131 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:51.132 [2024-11-20 13:30:32.563822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:51.132 [2024-11-20 13:30:32.563926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:51.132 [2024-11-20 13:30:32.563953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:16:51.132 [2024-11-20 13:30:32.563964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:51.132 [2024-11-20 13:30:32.566187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:51.132 [2024-11-20 13:30:32.566281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:51.132 [2024-11-20 13:30:32.566361] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:51.132 [2024-11-20 13:30:32.566401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:51.132 [2024-11-20 13:30:32.566473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80
00:16:51.132 [2024-11-20 13:30:32.566482] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:16:51.132 [2024-11-20 13:30:32.566573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:16:51.132 [2024-11-20 13:30:32.566639] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80
00:16:51.132 [2024-11-20 13:30:32.566649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80
00:16:51.132 [2024-11-20 13:30:32.566716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
pt2
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:51.132 "name": "raid_bdev1",
00:16:51.132 "uuid": "8feea0a5-fb95-4958-8314-cbc7fc6bc8d1",
00:16:51.132 "strip_size_kb": 0,
00:16:51.132 "state": "online",
00:16:51.132 "raid_level": "raid1",
00:16:51.132 "superblock": true,
00:16:51.132 "num_base_bdevs": 2,
00:16:51.132 "num_base_bdevs_discovered": 1,
00:16:51.132 "num_base_bdevs_operational": 1,
00:16:51.132 "base_bdevs_list": [
00:16:51.132 {
00:16:51.132 "name": null,
00:16:51.132 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:51.132 "is_configured": false,
00:16:51.132 "data_offset": 256,
00:16:51.132 "data_size": 7936
00:16:51.132 },
00:16:51.132 {
00:16:51.132 "name": "pt2",
00:16:51.132 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:51.132 "is_configured": true,
00:16:51.132 "data_offset": 256,
00:16:51.132 "data_size": 7936
00:16:51.132 }
00:16:51.132 ]
00:16:51.132 }'
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:51.132 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:51.392 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:51.392 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.392 13:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:51.392 [2024-11-20 13:30:33.003125] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:51.392 [2024-11-20 13:30:33.003234] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:51.392 [2024-11-20 13:30:33.003344] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:51.392 [2024-11-20 13:30:33.003402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:51.392 [2024-11-20 13:30:33.003418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline
00:16:51.393 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.393 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:51.393 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.393 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:51.393 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:16:51.393 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.681 [2024-11-20 13:30:33.067094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:51.681 [2024-11-20 13:30:33.067180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.681 [2024-11-20 13:30:33.067203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:51.681 [2024-11-20 13:30:33.067217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.681 [2024-11-20 13:30:33.069419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.681 [2024-11-20 13:30:33.069465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:51.681 [2024-11-20 13:30:33.069531] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:51.681 [2024-11-20 13:30:33.069570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:51.681 [2024-11-20 13:30:33.069668] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:51.681 [2024-11-20 13:30:33.069689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.681 [2024-11-20 13:30:33.069711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:16:51.681 [2024-11-20 13:30:33.069756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.681 [2024-11-20 13:30:33.069830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000002380 00:16:51.681 [2024-11-20 13:30:33.069843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:51.681 [2024-11-20 13:30:33.069940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:51.681 [2024-11-20 13:30:33.070013] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:16:51.681 [2024-11-20 13:30:33.070026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:16:51.681 [2024-11-20 13:30:33.070101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.681 pt1 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.681 13:30:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.681 "name": "raid_bdev1", 00:16:51.681 "uuid": "8feea0a5-fb95-4958-8314-cbc7fc6bc8d1", 00:16:51.681 "strip_size_kb": 0, 00:16:51.681 "state": "online", 00:16:51.681 "raid_level": "raid1", 00:16:51.681 "superblock": true, 00:16:51.681 "num_base_bdevs": 2, 00:16:51.681 "num_base_bdevs_discovered": 1, 00:16:51.681 "num_base_bdevs_operational": 1, 00:16:51.681 "base_bdevs_list": [ 00:16:51.681 { 00:16:51.681 "name": null, 00:16:51.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.681 "is_configured": false, 00:16:51.681 "data_offset": 256, 00:16:51.681 "data_size": 7936 00:16:51.681 }, 00:16:51.681 { 00:16:51.681 "name": "pt2", 00:16:51.681 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.681 "is_configured": true, 00:16:51.681 "data_offset": 256, 00:16:51.681 "data_size": 7936 00:16:51.681 } 00:16:51.681 ] 00:16:51.681 }' 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.681 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:51.942 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:51.942 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.942 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.942 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:51.942 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.942 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:51.942 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:51.942 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.942 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.942 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:51.942 [2024-11-20 13:30:33.606448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.202 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.202 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 8feea0a5-fb95-4958-8314-cbc7fc6bc8d1 '!=' 8feea0a5-fb95-4958-8314-cbc7fc6bc8d1 ']' 00:16:52.202 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 98790 00:16:52.202 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 98790 ']' 00:16:52.202 13:30:33 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 98790 00:16:52.202 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:16:52.202 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:52.202 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98790 00:16:52.202 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:52.202 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:52.202 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98790' 00:16:52.202 killing process with pid 98790 00:16:52.202 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 98790 00:16:52.202 [2024-11-20 13:30:33.694746] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:52.202 [2024-11-20 13:30:33.694923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.202 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 98790 00:16:52.202 [2024-11-20 13:30:33.695032] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.202 [2024-11-20 13:30:33.695088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:16:52.202 [2024-11-20 13:30:33.720297] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.465 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:16:52.465 00:16:52.465 real 0m5.195s 00:16:52.465 user 0m8.536s 00:16:52.465 sys 0m1.112s 00:16:52.465 
13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.465 13:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.465 ************************************ 00:16:52.466 END TEST raid_superblock_test_md_interleaved 00:16:52.466 ************************************ 00:16:52.466 13:30:33 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:16:52.466 13:30:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:52.466 13:30:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.466 13:30:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.466 ************************************ 00:16:52.466 START TEST raid_rebuild_test_sb_md_interleaved 00:16:52.466 ************************************ 00:16:52.466 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:16:52.466 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:52.466 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:52.466 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:52.466 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:52.466 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:16:52.466 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:52.466 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:52.466 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:52.466 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:52.466 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:52.466 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=99107 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99107 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 99107 ']' 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.467 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.468 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.468 13:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.468 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:52.468 Zero copy mechanism will not be used. 00:16:52.468 [2024-11-20 13:30:34.108883] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:16:52.468 [2024-11-20 13:30:34.109029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99107 ] 00:16:52.727 [2024-11-20 13:30:34.266677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.727 [2024-11-20 13:30:34.297272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.727 [2024-11-20 13:30:34.342107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.727 [2024-11-20 13:30:34.342230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.664 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.664 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:16:53.664 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:53.664 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:16:53.664 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.664 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.664 BaseBdev1_malloc 00:16:53.664 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.664 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:53.664 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.664 13:30:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.664 [2024-11-20 13:30:35.062099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:53.664 [2024-11-20 13:30:35.062173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.664 [2024-11-20 13:30:35.062223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:53.664 [2024-11-20 13:30:35.062235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.664 [2024-11-20 13:30:35.064396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.664 [2024-11-20 13:30:35.064506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:53.664 BaseBdev1 00:16:53.664 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.664 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:53.664 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:16:53.664 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.665 BaseBdev2_malloc 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.665 [2024-11-20 13:30:35.091433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:53.665 [2024-11-20 13:30:35.091596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.665 [2024-11-20 13:30:35.091630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:53.665 [2024-11-20 13:30:35.091643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.665 [2024-11-20 13:30:35.093924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.665 [2024-11-20 13:30:35.093973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:53.665 BaseBdev2 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.665 spare_malloc 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.665 spare_delay 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.665 [2024-11-20 13:30:35.124842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:53.665 [2024-11-20 13:30:35.124922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.665 [2024-11-20 13:30:35.124954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:53.665 [2024-11-20 13:30:35.124965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.665 [2024-11-20 13:30:35.127300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.665 [2024-11-20 13:30:35.127347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:53.665 spare 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.665 [2024-11-20 13:30:35.132894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.665 [2024-11-20 13:30:35.135028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.665 [2024-11-20 
13:30:35.135234] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:53.665 [2024-11-20 13:30:35.135249] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:53.665 [2024-11-20 13:30:35.135377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:53.665 [2024-11-20 13:30:35.135498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:53.665 [2024-11-20 13:30:35.135514] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:53.665 [2024-11-20 13:30:35.135621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.665 "name": "raid_bdev1", 00:16:53.665 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:16:53.665 "strip_size_kb": 0, 00:16:53.665 "state": "online", 00:16:53.665 "raid_level": "raid1", 00:16:53.665 "superblock": true, 00:16:53.665 "num_base_bdevs": 2, 00:16:53.665 "num_base_bdevs_discovered": 2, 00:16:53.665 "num_base_bdevs_operational": 2, 00:16:53.665 "base_bdevs_list": [ 00:16:53.665 { 00:16:53.665 "name": "BaseBdev1", 00:16:53.665 "uuid": "c923322a-27be-59d8-b287-1f864e999512", 00:16:53.665 "is_configured": true, 00:16:53.665 "data_offset": 256, 00:16:53.665 "data_size": 7936 00:16:53.665 }, 00:16:53.665 { 00:16:53.665 "name": "BaseBdev2", 00:16:53.665 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:16:53.665 "is_configured": true, 00:16:53.665 "data_offset": 256, 00:16:53.665 "data_size": 7936 00:16:53.665 } 00:16:53.665 ] 00:16:53.665 }' 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.665 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.233 13:30:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.233 [2024-11-20 13:30:35.604401] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:54.233 13:30:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.233 [2024-11-20 13:30:35.683934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.233 13:30:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.233 "name": "raid_bdev1", 00:16:54.233 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:16:54.233 "strip_size_kb": 0, 00:16:54.233 "state": "online", 00:16:54.233 "raid_level": "raid1", 00:16:54.233 "superblock": true, 00:16:54.233 "num_base_bdevs": 2, 00:16:54.233 "num_base_bdevs_discovered": 1, 00:16:54.233 "num_base_bdevs_operational": 1, 00:16:54.233 "base_bdevs_list": [ 00:16:54.233 { 00:16:54.233 "name": null, 00:16:54.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.233 "is_configured": false, 00:16:54.233 "data_offset": 0, 00:16:54.233 "data_size": 7936 00:16:54.233 }, 00:16:54.233 { 00:16:54.233 "name": "BaseBdev2", 00:16:54.233 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:16:54.233 "is_configured": true, 00:16:54.233 "data_offset": 256, 00:16:54.233 "data_size": 7936 00:16:54.233 } 00:16:54.233 ] 00:16:54.233 }' 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.233 13:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.802 13:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:54.802 13:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.802 13:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.802 [2024-11-20 13:30:36.183172] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.802 [2024-11-20 13:30:36.200039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:54.802 13:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.802 13:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:54.802 [2024-11-20 13:30:36.202657] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:55.742 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.742 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.742 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.742 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.742 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.742 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.742 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.742 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.742 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.742 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.742 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.742 "name": "raid_bdev1", 00:16:55.742 
"uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:16:55.742 "strip_size_kb": 0, 00:16:55.742 "state": "online", 00:16:55.742 "raid_level": "raid1", 00:16:55.742 "superblock": true, 00:16:55.742 "num_base_bdevs": 2, 00:16:55.742 "num_base_bdevs_discovered": 2, 00:16:55.742 "num_base_bdevs_operational": 2, 00:16:55.742 "process": { 00:16:55.742 "type": "rebuild", 00:16:55.742 "target": "spare", 00:16:55.742 "progress": { 00:16:55.742 "blocks": 2560, 00:16:55.742 "percent": 32 00:16:55.742 } 00:16:55.742 }, 00:16:55.742 "base_bdevs_list": [ 00:16:55.742 { 00:16:55.742 "name": "spare", 00:16:55.742 "uuid": "3b531a83-ba3d-5d31-8432-2614edb06e9f", 00:16:55.742 "is_configured": true, 00:16:55.742 "data_offset": 256, 00:16:55.742 "data_size": 7936 00:16:55.742 }, 00:16:55.742 { 00:16:55.742 "name": "BaseBdev2", 00:16:55.742 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:16:55.742 "is_configured": true, 00:16:55.743 "data_offset": 256, 00:16:55.743 "data_size": 7936 00:16:55.743 } 00:16:55.743 ] 00:16:55.743 }' 00:16:55.743 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.743 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.743 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.743 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.743 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:55.743 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.743 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.743 [2024-11-20 13:30:37.362145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:16:56.003 [2024-11-20 13:30:37.409369] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:56.003 [2024-11-20 13:30:37.409474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.003 [2024-11-20 13:30:37.409498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:56.003 [2024-11-20 13:30:37.409508] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.003 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.003 "name": "raid_bdev1", 00:16:56.003 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:16:56.003 "strip_size_kb": 0, 00:16:56.003 "state": "online", 00:16:56.003 "raid_level": "raid1", 00:16:56.003 "superblock": true, 00:16:56.003 "num_base_bdevs": 2, 00:16:56.003 "num_base_bdevs_discovered": 1, 00:16:56.003 "num_base_bdevs_operational": 1, 00:16:56.003 "base_bdevs_list": [ 00:16:56.003 { 00:16:56.003 "name": null, 00:16:56.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.003 "is_configured": false, 00:16:56.003 "data_offset": 0, 00:16:56.003 "data_size": 7936 00:16:56.003 }, 00:16:56.003 { 00:16:56.003 "name": "BaseBdev2", 00:16:56.003 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:16:56.003 "is_configured": true, 00:16:56.003 "data_offset": 256, 00:16:56.003 "data_size": 7936 00:16:56.003 } 00:16:56.003 ] 00:16:56.003 }' 00:16:56.004 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.004 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.264 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:56.264 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:56.264 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:56.264 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:56.264 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.264 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.264 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.264 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.264 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.264 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.264 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.264 "name": "raid_bdev1", 00:16:56.264 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:16:56.264 "strip_size_kb": 0, 00:16:56.264 "state": "online", 00:16:56.264 "raid_level": "raid1", 00:16:56.264 "superblock": true, 00:16:56.264 "num_base_bdevs": 2, 00:16:56.264 "num_base_bdevs_discovered": 1, 00:16:56.264 "num_base_bdevs_operational": 1, 00:16:56.264 "base_bdevs_list": [ 00:16:56.264 { 00:16:56.264 "name": null, 00:16:56.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.264 "is_configured": false, 00:16:56.264 "data_offset": 0, 00:16:56.264 "data_size": 7936 00:16:56.264 }, 00:16:56.264 { 00:16:56.264 "name": "BaseBdev2", 00:16:56.264 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:16:56.264 "is_configured": true, 00:16:56.264 "data_offset": 256, 00:16:56.264 "data_size": 7936 00:16:56.264 } 00:16:56.264 ] 00:16:56.264 }' 
00:16:56.264 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.524 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:56.524 13:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.524 13:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:56.524 13:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:56.524 13:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.524 13:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.524 [2024-11-20 13:30:38.009317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:56.524 [2024-11-20 13:30:38.013357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:56.524 13:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.524 13:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:56.524 [2024-11-20 13:30:38.015635] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:57.463 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.463 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.463 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.463 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:16:57.463 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.463 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.463 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.463 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.463 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.463 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.463 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.463 "name": "raid_bdev1", 00:16:57.463 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:16:57.463 "strip_size_kb": 0, 00:16:57.463 "state": "online", 00:16:57.463 "raid_level": "raid1", 00:16:57.463 "superblock": true, 00:16:57.463 "num_base_bdevs": 2, 00:16:57.463 "num_base_bdevs_discovered": 2, 00:16:57.463 "num_base_bdevs_operational": 2, 00:16:57.463 "process": { 00:16:57.463 "type": "rebuild", 00:16:57.463 "target": "spare", 00:16:57.463 "progress": { 00:16:57.463 "blocks": 2560, 00:16:57.463 "percent": 32 00:16:57.463 } 00:16:57.463 }, 00:16:57.463 "base_bdevs_list": [ 00:16:57.463 { 00:16:57.463 "name": "spare", 00:16:57.463 "uuid": "3b531a83-ba3d-5d31-8432-2614edb06e9f", 00:16:57.463 "is_configured": true, 00:16:57.464 "data_offset": 256, 00:16:57.464 "data_size": 7936 00:16:57.464 }, 00:16:57.464 { 00:16:57.464 "name": "BaseBdev2", 00:16:57.464 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:16:57.464 "is_configured": true, 00:16:57.464 "data_offset": 256, 00:16:57.464 "data_size": 7936 00:16:57.464 } 00:16:57.464 ] 00:16:57.464 }' 00:16:57.464 13:30:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.464 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.464 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.724 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.724 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:57.724 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:57.724 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:57.724 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:57.724 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:57.724 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:57.724 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=628 00:16:57.724 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.725 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.725 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.725 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.725 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.725 13:30:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.725 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.725 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.725 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.725 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.725 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.725 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.725 "name": "raid_bdev1", 00:16:57.725 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:16:57.725 "strip_size_kb": 0, 00:16:57.725 "state": "online", 00:16:57.725 "raid_level": "raid1", 00:16:57.725 "superblock": true, 00:16:57.725 "num_base_bdevs": 2, 00:16:57.725 "num_base_bdevs_discovered": 2, 00:16:57.725 "num_base_bdevs_operational": 2, 00:16:57.725 "process": { 00:16:57.725 "type": "rebuild", 00:16:57.725 "target": "spare", 00:16:57.725 "progress": { 00:16:57.725 "blocks": 2816, 00:16:57.725 "percent": 35 00:16:57.725 } 00:16:57.725 }, 00:16:57.725 "base_bdevs_list": [ 00:16:57.725 { 00:16:57.725 "name": "spare", 00:16:57.725 "uuid": "3b531a83-ba3d-5d31-8432-2614edb06e9f", 00:16:57.725 "is_configured": true, 00:16:57.725 "data_offset": 256, 00:16:57.725 "data_size": 7936 00:16:57.725 }, 00:16:57.725 { 00:16:57.725 "name": "BaseBdev2", 00:16:57.725 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:16:57.726 "is_configured": true, 00:16:57.726 "data_offset": 256, 00:16:57.726 "data_size": 7936 00:16:57.726 } 00:16:57.726 ] 00:16:57.726 }' 00:16:57.726 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.726 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.726 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.726 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.726 13:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.107 13:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.107 13:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.107 13:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.107 13:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.107 13:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.107 13:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.107 13:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.107 13:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.107 13:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.107 13:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.107 13:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.107 13:30:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.107 "name": "raid_bdev1", 00:16:59.107 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:16:59.107 "strip_size_kb": 0, 00:16:59.107 "state": "online", 00:16:59.107 "raid_level": "raid1", 00:16:59.107 "superblock": true, 00:16:59.107 "num_base_bdevs": 2, 00:16:59.107 "num_base_bdevs_discovered": 2, 00:16:59.107 "num_base_bdevs_operational": 2, 00:16:59.107 "process": { 00:16:59.107 "type": "rebuild", 00:16:59.107 "target": "spare", 00:16:59.107 "progress": { 00:16:59.107 "blocks": 5888, 00:16:59.107 "percent": 74 00:16:59.107 } 00:16:59.107 }, 00:16:59.107 "base_bdevs_list": [ 00:16:59.107 { 00:16:59.107 "name": "spare", 00:16:59.107 "uuid": "3b531a83-ba3d-5d31-8432-2614edb06e9f", 00:16:59.107 "is_configured": true, 00:16:59.107 "data_offset": 256, 00:16:59.107 "data_size": 7936 00:16:59.107 }, 00:16:59.107 { 00:16:59.107 "name": "BaseBdev2", 00:16:59.107 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:16:59.107 "is_configured": true, 00:16:59.107 "data_offset": 256, 00:16:59.107 "data_size": 7936 00:16:59.107 } 00:16:59.107 ] 00:16:59.107 }' 00:16:59.107 13:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.107 13:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.107 13:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.107 13:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.107 13:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.675 [2024-11-20 13:30:41.130519] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:59.675 [2024-11-20 13:30:41.130721] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:59.675 [2024-11-20 13:30:41.130897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.934 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.934 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.934 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.934 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.934 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.934 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.934 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.934 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.934 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.934 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.934 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.934 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.934 "name": "raid_bdev1", 00:16:59.934 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:16:59.934 "strip_size_kb": 0, 00:16:59.934 "state": "online", 00:16:59.934 "raid_level": "raid1", 00:16:59.934 "superblock": true, 00:16:59.934 "num_base_bdevs": 2, 00:16:59.934 
"num_base_bdevs_discovered": 2, 00:16:59.934 "num_base_bdevs_operational": 2, 00:16:59.934 "base_bdevs_list": [ 00:16:59.934 { 00:16:59.934 "name": "spare", 00:16:59.934 "uuid": "3b531a83-ba3d-5d31-8432-2614edb06e9f", 00:16:59.934 "is_configured": true, 00:16:59.934 "data_offset": 256, 00:16:59.934 "data_size": 7936 00:16:59.934 }, 00:16:59.934 { 00:16:59.934 "name": "BaseBdev2", 00:16:59.934 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:16:59.934 "is_configured": true, 00:16:59.934 "data_offset": 256, 00:16:59.934 "data_size": 7936 00:16:59.934 } 00:16:59.934 ] 00:16:59.934 }' 00:16:59.934 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.934 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:59.934 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.194 13:30:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.194 "name": "raid_bdev1", 00:17:00.194 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:17:00.194 "strip_size_kb": 0, 00:17:00.194 "state": "online", 00:17:00.194 "raid_level": "raid1", 00:17:00.194 "superblock": true, 00:17:00.194 "num_base_bdevs": 2, 00:17:00.194 "num_base_bdevs_discovered": 2, 00:17:00.194 "num_base_bdevs_operational": 2, 00:17:00.194 "base_bdevs_list": [ 00:17:00.194 { 00:17:00.194 "name": "spare", 00:17:00.194 "uuid": "3b531a83-ba3d-5d31-8432-2614edb06e9f", 00:17:00.194 "is_configured": true, 00:17:00.194 "data_offset": 256, 00:17:00.194 "data_size": 7936 00:17:00.194 }, 00:17:00.194 { 00:17:00.194 "name": "BaseBdev2", 00:17:00.194 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:17:00.194 "is_configured": true, 00:17:00.194 "data_offset": 256, 00:17:00.194 "data_size": 7936 00:17:00.194 } 00:17:00.194 ] 00:17:00.194 }' 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:00.194 13:30:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.194 "name": 
"raid_bdev1", 00:17:00.194 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:17:00.194 "strip_size_kb": 0, 00:17:00.194 "state": "online", 00:17:00.194 "raid_level": "raid1", 00:17:00.194 "superblock": true, 00:17:00.194 "num_base_bdevs": 2, 00:17:00.194 "num_base_bdevs_discovered": 2, 00:17:00.194 "num_base_bdevs_operational": 2, 00:17:00.194 "base_bdevs_list": [ 00:17:00.194 { 00:17:00.194 "name": "spare", 00:17:00.194 "uuid": "3b531a83-ba3d-5d31-8432-2614edb06e9f", 00:17:00.194 "is_configured": true, 00:17:00.194 "data_offset": 256, 00:17:00.194 "data_size": 7936 00:17:00.194 }, 00:17:00.194 { 00:17:00.194 "name": "BaseBdev2", 00:17:00.194 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:17:00.194 "is_configured": true, 00:17:00.194 "data_offset": 256, 00:17:00.194 "data_size": 7936 00:17:00.194 } 00:17:00.194 ] 00:17:00.194 }' 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.194 13:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.764 [2024-11-20 13:30:42.201497] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.764 [2024-11-20 13:30:42.201609] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.764 [2024-11-20 13:30:42.201766] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.764 [2024-11-20 13:30:42.201879] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.764 [2024-11-20 
13:30:42.201949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.764 13:30:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.764 [2024-11-20 13:30:42.273364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:00.764 [2024-11-20 13:30:42.273447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.764 [2024-11-20 13:30:42.273473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:00.764 [2024-11-20 13:30:42.273488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.764 [2024-11-20 13:30:42.275811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.764 [2024-11-20 13:30:42.275919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:00.764 [2024-11-20 13:30:42.276014] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:00.764 [2024-11-20 13:30:42.276075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:00.764 [2024-11-20 13:30:42.276188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:00.764 spare 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.764 [2024-11-20 13:30:42.376112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:17:00.764 [2024-11-20 13:30:42.376166] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:00.764 [2024-11-20 13:30:42.376323] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:17:00.764 [2024-11-20 13:30:42.376450] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:17:00.764 [2024-11-20 13:30:42.376465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:17:00.764 [2024-11-20 13:30:42.376586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.764 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.765 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:00.765 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.765 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.765 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.765 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.765 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:00.765 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.765 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.765 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.765 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.765 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.765 13:30:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.765 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.765 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.765 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.024 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.024 "name": "raid_bdev1", 00:17:01.024 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:17:01.024 "strip_size_kb": 0, 00:17:01.024 "state": "online", 00:17:01.024 "raid_level": "raid1", 00:17:01.024 "superblock": true, 00:17:01.024 "num_base_bdevs": 2, 00:17:01.024 "num_base_bdevs_discovered": 2, 00:17:01.024 "num_base_bdevs_operational": 2, 00:17:01.024 "base_bdevs_list": [ 00:17:01.024 { 00:17:01.024 "name": "spare", 00:17:01.024 "uuid": "3b531a83-ba3d-5d31-8432-2614edb06e9f", 00:17:01.024 "is_configured": true, 00:17:01.024 "data_offset": 256, 00:17:01.024 "data_size": 7936 00:17:01.024 }, 00:17:01.024 { 00:17:01.024 "name": "BaseBdev2", 00:17:01.024 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:17:01.024 "is_configured": true, 00:17:01.024 "data_offset": 256, 00:17:01.024 "data_size": 7936 00:17:01.024 } 00:17:01.024 ] 00:17:01.024 }' 00:17:01.024 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.024 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.282 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.282 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.282 13:30:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.282 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.282 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.282 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.282 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.282 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.282 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.282 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.282 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.282 "name": "raid_bdev1", 00:17:01.282 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:17:01.282 "strip_size_kb": 0, 00:17:01.282 "state": "online", 00:17:01.282 "raid_level": "raid1", 00:17:01.282 "superblock": true, 00:17:01.282 "num_base_bdevs": 2, 00:17:01.282 "num_base_bdevs_discovered": 2, 00:17:01.282 "num_base_bdevs_operational": 2, 00:17:01.282 "base_bdevs_list": [ 00:17:01.282 { 00:17:01.282 "name": "spare", 00:17:01.282 "uuid": "3b531a83-ba3d-5d31-8432-2614edb06e9f", 00:17:01.282 "is_configured": true, 00:17:01.282 "data_offset": 256, 00:17:01.282 "data_size": 7936 00:17:01.282 }, 00:17:01.282 { 00:17:01.282 "name": "BaseBdev2", 00:17:01.282 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:17:01.282 "is_configured": true, 00:17:01.282 "data_offset": 256, 00:17:01.282 "data_size": 7936 00:17:01.282 } 00:17:01.282 ] 00:17:01.282 }' 00:17:01.282 13:30:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.282 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.282 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.540 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:01.540 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.540 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.540 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.540 13:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.540 [2024-11-20 13:30:43.052233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:01.540 13:30:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.540 "name": "raid_bdev1", 00:17:01.540 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:17:01.540 "strip_size_kb": 0, 00:17:01.540 "state": "online", 00:17:01.540 
"raid_level": "raid1", 00:17:01.540 "superblock": true, 00:17:01.540 "num_base_bdevs": 2, 00:17:01.540 "num_base_bdevs_discovered": 1, 00:17:01.540 "num_base_bdevs_operational": 1, 00:17:01.540 "base_bdevs_list": [ 00:17:01.540 { 00:17:01.540 "name": null, 00:17:01.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.540 "is_configured": false, 00:17:01.540 "data_offset": 0, 00:17:01.540 "data_size": 7936 00:17:01.540 }, 00:17:01.540 { 00:17:01.540 "name": "BaseBdev2", 00:17:01.540 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:17:01.540 "is_configured": true, 00:17:01.540 "data_offset": 256, 00:17:01.540 "data_size": 7936 00:17:01.540 } 00:17:01.540 ] 00:17:01.540 }' 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.540 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.109 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:02.109 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.109 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.109 [2024-11-20 13:30:43.507582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.109 [2024-11-20 13:30:43.507821] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:02.109 [2024-11-20 13:30:43.507838] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:02.109 [2024-11-20 13:30:43.507882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.109 [2024-11-20 13:30:43.511759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:17:02.109 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.109 13:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:02.109 [2024-11-20 13:30:43.514149] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:03.074 "name": "raid_bdev1", 00:17:03.074 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:17:03.074 "strip_size_kb": 0, 00:17:03.074 "state": "online", 00:17:03.074 "raid_level": "raid1", 00:17:03.074 "superblock": true, 00:17:03.074 "num_base_bdevs": 2, 00:17:03.074 "num_base_bdevs_discovered": 2, 00:17:03.074 "num_base_bdevs_operational": 2, 00:17:03.074 "process": { 00:17:03.074 "type": "rebuild", 00:17:03.074 "target": "spare", 00:17:03.074 "progress": { 00:17:03.074 "blocks": 2560, 00:17:03.074 "percent": 32 00:17:03.074 } 00:17:03.074 }, 00:17:03.074 "base_bdevs_list": [ 00:17:03.074 { 00:17:03.074 "name": "spare", 00:17:03.074 "uuid": "3b531a83-ba3d-5d31-8432-2614edb06e9f", 00:17:03.074 "is_configured": true, 00:17:03.074 "data_offset": 256, 00:17:03.074 "data_size": 7936 00:17:03.074 }, 00:17:03.074 { 00:17:03.074 "name": "BaseBdev2", 00:17:03.074 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:17:03.074 "is_configured": true, 00:17:03.074 "data_offset": 256, 00:17:03.074 "data_size": 7936 00:17:03.074 } 00:17:03.074 ] 00:17:03.074 }' 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.074 [2024-11-20 13:30:44.679313] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:03.074 [2024-11-20 13:30:44.719769] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:03.074 [2024-11-20 13:30:44.719864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.074 [2024-11-20 13:30:44.719885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:03.074 [2024-11-20 13:30:44.719893] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.074 13:30:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.074 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.334 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.334 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.334 "name": "raid_bdev1", 00:17:03.334 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:17:03.334 "strip_size_kb": 0, 00:17:03.334 "state": "online", 00:17:03.334 "raid_level": "raid1", 00:17:03.334 "superblock": true, 00:17:03.334 "num_base_bdevs": 2, 00:17:03.334 "num_base_bdevs_discovered": 1, 00:17:03.334 "num_base_bdevs_operational": 1, 00:17:03.334 "base_bdevs_list": [ 00:17:03.334 { 00:17:03.334 "name": null, 00:17:03.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.334 "is_configured": false, 00:17:03.334 "data_offset": 0, 00:17:03.334 "data_size": 7936 00:17:03.334 }, 00:17:03.334 { 00:17:03.334 "name": "BaseBdev2", 00:17:03.334 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:17:03.334 "is_configured": true, 00:17:03.334 "data_offset": 256, 00:17:03.334 "data_size": 7936 00:17:03.334 } 00:17:03.334 ] 00:17:03.334 }' 00:17:03.334 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.334 13:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.595 13:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:03.595 13:30:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.595 13:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.595 [2024-11-20 13:30:45.151605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:03.595 [2024-11-20 13:30:45.151746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.595 [2024-11-20 13:30:45.151806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:03.595 [2024-11-20 13:30:45.151855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.595 [2024-11-20 13:30:45.152125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.595 [2024-11-20 13:30:45.152185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:03.595 [2024-11-20 13:30:45.152296] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:03.595 [2024-11-20 13:30:45.152338] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:03.595 [2024-11-20 13:30:45.152394] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:03.595 [2024-11-20 13:30:45.152449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:03.595 [2024-11-20 13:30:45.156224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:17:03.595 spare 00:17:03.595 13:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.595 13:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:03.595 [2024-11-20 13:30:45.158348] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:04.535 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.535 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.535 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.535 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.535 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.535 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.535 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.535 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.535 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.535 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:04.795 "name": "raid_bdev1", 00:17:04.795 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:17:04.795 "strip_size_kb": 0, 00:17:04.795 "state": "online", 00:17:04.795 "raid_level": "raid1", 00:17:04.795 "superblock": true, 00:17:04.795 "num_base_bdevs": 2, 00:17:04.795 "num_base_bdevs_discovered": 2, 00:17:04.795 "num_base_bdevs_operational": 2, 00:17:04.795 "process": { 00:17:04.795 "type": "rebuild", 00:17:04.795 "target": "spare", 00:17:04.795 "progress": { 00:17:04.795 "blocks": 2560, 00:17:04.795 "percent": 32 00:17:04.795 } 00:17:04.795 }, 00:17:04.795 "base_bdevs_list": [ 00:17:04.795 { 00:17:04.795 "name": "spare", 00:17:04.795 "uuid": "3b531a83-ba3d-5d31-8432-2614edb06e9f", 00:17:04.795 "is_configured": true, 00:17:04.795 "data_offset": 256, 00:17:04.795 "data_size": 7936 00:17:04.795 }, 00:17:04.795 { 00:17:04.795 "name": "BaseBdev2", 00:17:04.795 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:17:04.795 "is_configured": true, 00:17:04.795 "data_offset": 256, 00:17:04.795 "data_size": 7936 00:17:04.795 } 00:17:04.795 ] 00:17:04.795 }' 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.795 [2024-11-20 
13:30:46.318649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.795 [2024-11-20 13:30:46.363634] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:04.795 [2024-11-20 13:30:46.363829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.795 [2024-11-20 13:30:46.363874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.795 [2024-11-20 13:30:46.363918] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.795 13:30:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.795 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.795 "name": "raid_bdev1", 00:17:04.795 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:17:04.795 "strip_size_kb": 0, 00:17:04.795 "state": "online", 00:17:04.795 "raid_level": "raid1", 00:17:04.795 "superblock": true, 00:17:04.795 "num_base_bdevs": 2, 00:17:04.795 "num_base_bdevs_discovered": 1, 00:17:04.795 "num_base_bdevs_operational": 1, 00:17:04.795 "base_bdevs_list": [ 00:17:04.795 { 00:17:04.795 "name": null, 00:17:04.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.795 "is_configured": false, 00:17:04.795 "data_offset": 0, 00:17:04.795 "data_size": 7936 00:17:04.795 }, 00:17:04.795 { 00:17:04.795 "name": "BaseBdev2", 00:17:04.795 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:17:04.795 "is_configured": true, 00:17:04.795 "data_offset": 256, 00:17:04.795 "data_size": 7936 00:17:04.796 } 00:17:04.796 ] 00:17:04.796 }' 00:17:04.796 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.796 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:05.365 13:30:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.365 "name": "raid_bdev1", 00:17:05.365 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:17:05.365 "strip_size_kb": 0, 00:17:05.365 "state": "online", 00:17:05.365 "raid_level": "raid1", 00:17:05.365 "superblock": true, 00:17:05.365 "num_base_bdevs": 2, 00:17:05.365 "num_base_bdevs_discovered": 1, 00:17:05.365 "num_base_bdevs_operational": 1, 00:17:05.365 "base_bdevs_list": [ 00:17:05.365 { 00:17:05.365 "name": null, 00:17:05.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.365 "is_configured": false, 00:17:05.365 "data_offset": 0, 00:17:05.365 "data_size": 7936 00:17:05.365 }, 00:17:05.365 { 00:17:05.365 "name": "BaseBdev2", 00:17:05.365 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:17:05.365 "is_configured": true, 00:17:05.365 "data_offset": 256, 
00:17:05.365 "data_size": 7936 00:17:05.365 } 00:17:05.365 ] 00:17:05.365 }' 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.365 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.365 [2024-11-20 13:30:46.963218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:05.365 [2024-11-20 13:30:46.963279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.365 [2024-11-20 13:30:46.963299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:05.365 [2024-11-20 13:30:46.963310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.365 [2024-11-20 13:30:46.963485] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.365 [2024-11-20 13:30:46.963501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:05.366 [2024-11-20 13:30:46.963551] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:05.366 [2024-11-20 13:30:46.963569] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:05.366 [2024-11-20 13:30:46.963577] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:05.366 [2024-11-20 13:30:46.963590] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:05.366 BaseBdev1 00:17:05.366 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.366 13:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:06.747 13:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:06.747 13:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.747 13:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.747 13:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.747 13:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.747 13:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:06.747 13:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.747 13:30:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.747 13:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.747 13:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.747 13:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.747 13:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.747 13:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.747 13:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.747 13:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.747 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.747 "name": "raid_bdev1", 00:17:06.747 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:17:06.747 "strip_size_kb": 0, 00:17:06.747 "state": "online", 00:17:06.747 "raid_level": "raid1", 00:17:06.747 "superblock": true, 00:17:06.747 "num_base_bdevs": 2, 00:17:06.747 "num_base_bdevs_discovered": 1, 00:17:06.747 "num_base_bdevs_operational": 1, 00:17:06.747 "base_bdevs_list": [ 00:17:06.747 { 00:17:06.747 "name": null, 00:17:06.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.747 "is_configured": false, 00:17:06.747 "data_offset": 0, 00:17:06.747 "data_size": 7936 00:17:06.747 }, 00:17:06.747 { 00:17:06.747 "name": "BaseBdev2", 00:17:06.747 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:17:06.747 "is_configured": true, 00:17:06.747 "data_offset": 256, 00:17:06.747 "data_size": 7936 00:17:06.747 } 00:17:06.747 ] 00:17:06.747 }' 00:17:06.747 13:30:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.747 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.747 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:06.747 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.747 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:06.747 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:06.747 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.747 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.747 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.747 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.747 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:07.007 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.007 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.007 "name": "raid_bdev1", 00:17:07.007 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:17:07.007 "strip_size_kb": 0, 00:17:07.007 "state": "online", 00:17:07.007 "raid_level": "raid1", 00:17:07.007 "superblock": true, 00:17:07.007 "num_base_bdevs": 2, 00:17:07.007 "num_base_bdevs_discovered": 1, 00:17:07.007 "num_base_bdevs_operational": 1, 00:17:07.007 "base_bdevs_list": [ 00:17:07.007 { 00:17:07.007 "name": 
null, 00:17:07.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.007 "is_configured": false, 00:17:07.007 "data_offset": 0, 00:17:07.007 "data_size": 7936 00:17:07.007 }, 00:17:07.007 { 00:17:07.007 "name": "BaseBdev2", 00:17:07.007 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:17:07.007 "is_configured": true, 00:17:07.007 "data_offset": 256, 00:17:07.007 "data_size": 7936 00:17:07.007 } 00:17:07.007 ] 00:17:07.007 }' 00:17:07.007 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.007 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:07.007 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.007 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:07.007 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:07.007 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:07.007 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:07.007 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:07.007 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.007 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:07.007 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.007 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:07.007 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.007 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:07.008 [2024-11-20 13:30:48.540612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.008 [2024-11-20 13:30:48.540856] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:07.008 [2024-11-20 13:30:48.540874] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:07.008 request: 00:17:07.008 { 00:17:07.008 "base_bdev": "BaseBdev1", 00:17:07.008 "raid_bdev": "raid_bdev1", 00:17:07.008 "method": "bdev_raid_add_base_bdev", 00:17:07.008 "req_id": 1 00:17:07.008 } 00:17:07.008 Got JSON-RPC error response 00:17:07.008 response: 00:17:07.008 { 00:17:07.008 "code": -22, 00:17:07.008 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:07.008 } 00:17:07.008 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:07.008 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:07.008 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:07.008 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:07.008 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:07.008 13:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:07.948 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:17:07.948 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.948 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.948 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.948 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.948 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:07.948 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.948 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.948 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.948 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.948 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.948 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.948 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.948 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:07.948 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.948 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.948 "name": "raid_bdev1", 00:17:07.948 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:17:07.948 "strip_size_kb": 0, 
00:17:07.948 "state": "online", 00:17:07.948 "raid_level": "raid1", 00:17:07.948 "superblock": true, 00:17:07.948 "num_base_bdevs": 2, 00:17:07.948 "num_base_bdevs_discovered": 1, 00:17:07.948 "num_base_bdevs_operational": 1, 00:17:07.948 "base_bdevs_list": [ 00:17:07.948 { 00:17:07.948 "name": null, 00:17:07.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.948 "is_configured": false, 00:17:07.948 "data_offset": 0, 00:17:07.948 "data_size": 7936 00:17:07.948 }, 00:17:07.948 { 00:17:07.948 "name": "BaseBdev2", 00:17:07.948 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:17:07.948 "is_configured": true, 00:17:07.948 "data_offset": 256, 00:17:07.948 "data_size": 7936 00:17:07.948 } 00:17:07.948 ] 00:17:07.948 }' 00:17:08.208 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.208 13:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.468 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.468 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.468 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.468 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.468 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.468 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.468 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.468 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.468 13:30:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.468 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.468 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.468 "name": "raid_bdev1", 00:17:08.468 "uuid": "e470c3d3-50e0-40b1-a8ef-d9f8fdf0cd15", 00:17:08.468 "strip_size_kb": 0, 00:17:08.468 "state": "online", 00:17:08.468 "raid_level": "raid1", 00:17:08.468 "superblock": true, 00:17:08.468 "num_base_bdevs": 2, 00:17:08.468 "num_base_bdevs_discovered": 1, 00:17:08.468 "num_base_bdevs_operational": 1, 00:17:08.468 "base_bdevs_list": [ 00:17:08.468 { 00:17:08.468 "name": null, 00:17:08.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.468 "is_configured": false, 00:17:08.468 "data_offset": 0, 00:17:08.468 "data_size": 7936 00:17:08.468 }, 00:17:08.468 { 00:17:08.468 "name": "BaseBdev2", 00:17:08.468 "uuid": "a1c4ba18-5711-5327-9509-47c0f73868ab", 00:17:08.468 "is_configured": true, 00:17:08.468 "data_offset": 256, 00:17:08.468 "data_size": 7936 00:17:08.468 } 00:17:08.468 ] 00:17:08.468 }' 00:17:08.468 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.468 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.468 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.727 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:08.727 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99107 00:17:08.727 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 99107 ']' 00:17:08.727 13:30:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 99107 00:17:08.727 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:08.727 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.727 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99107 00:17:08.727 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:08.727 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:08.727 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99107' 00:17:08.727 killing process with pid 99107 00:17:08.727 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 99107 00:17:08.727 Received shutdown signal, test time was about 60.000000 seconds 00:17:08.727 00:17:08.727 Latency(us) 00:17:08.727 [2024-11-20T13:30:50.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.727 [2024-11-20T13:30:50.395Z] =================================================================================================================== 00:17:08.727 [2024-11-20T13:30:50.395Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:08.727 [2024-11-20 13:30:50.177646] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:08.727 [2024-11-20 13:30:50.177793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.727 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 99107 00:17:08.727 [2024-11-20 13:30:50.177850] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:17:08.727 [2024-11-20 13:30:50.177859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:17:08.727 [2024-11-20 13:30:50.211349] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:09.052 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:17:09.052 00:17:09.052 real 0m16.394s 00:17:09.052 user 0m22.090s 00:17:09.052 sys 0m1.703s 00:17:09.052 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.052 ************************************ 00:17:09.052 END TEST raid_rebuild_test_sb_md_interleaved 00:17:09.052 ************************************ 00:17:09.052 13:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.052 13:30:50 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:17:09.052 13:30:50 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:17:09.052 13:30:50 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99107 ']' 00:17:09.052 13:30:50 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99107 00:17:09.052 13:30:50 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:17:09.052 ************************************ 00:17:09.052 00:17:09.052 real 10m8.772s 00:17:09.052 user 14m31.129s 00:17:09.052 sys 1m48.023s 00:17:09.052 13:30:50 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.052 13:30:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.052 END TEST bdev_raid 00:17:09.052 ************************************ 00:17:09.052 13:30:50 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:09.052 13:30:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:09.052 13:30:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.052 13:30:50 -- common/autotest_common.sh@10 -- # set +x 00:17:09.052 
************************************ 00:17:09.052 START TEST spdkcli_raid 00:17:09.052 ************************************ 00:17:09.052 13:30:50 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:09.052 * Looking for test storage... 00:17:09.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:09.052 13:30:50 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:09.052 13:30:50 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:09.052 13:30:50 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:09.315 13:30:50 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:17:09.315 13:30:50 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:09.316 13:30:50 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:09.316 13:30:50 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:09.316 13:30:50 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:17:09.316 13:30:50 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.316 13:30:50 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:09.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.316 --rc genhtml_branch_coverage=1 00:17:09.316 --rc genhtml_function_coverage=1 00:17:09.316 --rc genhtml_legend=1 00:17:09.316 --rc geninfo_all_blocks=1 00:17:09.316 --rc geninfo_unexecuted_blocks=1 00:17:09.316 00:17:09.316 ' 00:17:09.316 13:30:50 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:09.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.316 --rc genhtml_branch_coverage=1 00:17:09.316 --rc genhtml_function_coverage=1 00:17:09.316 --rc genhtml_legend=1 00:17:09.316 --rc geninfo_all_blocks=1 00:17:09.316 --rc geninfo_unexecuted_blocks=1 00:17:09.316 00:17:09.316 ' 00:17:09.316 
13:30:50 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:09.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.316 --rc genhtml_branch_coverage=1 00:17:09.316 --rc genhtml_function_coverage=1 00:17:09.316 --rc genhtml_legend=1 00:17:09.316 --rc geninfo_all_blocks=1 00:17:09.316 --rc geninfo_unexecuted_blocks=1 00:17:09.316 00:17:09.316 ' 00:17:09.316 13:30:50 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:09.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.316 --rc genhtml_branch_coverage=1 00:17:09.316 --rc genhtml_function_coverage=1 00:17:09.316 --rc genhtml_legend=1 00:17:09.316 --rc geninfo_all_blocks=1 00:17:09.316 --rc geninfo_unexecuted_blocks=1 00:17:09.316 00:17:09.316 ' 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:09.316 13:30:50 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:17:09.316 13:30:50 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:09.316 13:30:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=99777 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:09.316 13:30:50 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 99777 00:17:09.316 13:30:50 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 99777 ']' 00:17:09.316 13:30:50 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.316 13:30:50 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.316 13:30:50 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.316 13:30:50 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.316 13:30:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.316 [2024-11-20 13:30:50.926747] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:17:09.316 [2024-11-20 13:30:50.926949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99777 ] 00:17:09.576 [2024-11-20 13:30:51.061807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:09.576 [2024-11-20 13:30:51.089176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.576 [2024-11-20 13:30:51.089275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.147 13:30:51 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.147 13:30:51 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:17:10.147 13:30:51 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:17:10.147 13:30:51 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:10.147 13:30:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:10.406 13:30:51 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:17:10.406 13:30:51 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.406 13:30:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:10.406 13:30:51 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:17:10.406 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:17:10.406 ' 00:17:11.816 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:17:11.816 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:17:11.816 13:30:53 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:17:11.816 13:30:53 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:11.816 13:30:53 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:17:12.074 13:30:53 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:17:12.074 13:30:53 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:12.074 13:30:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.074 13:30:53 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:17:12.074 ' 00:17:13.012 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:17:13.271 13:30:54 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:17:13.271 13:30:54 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:13.271 13:30:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.271 13:30:54 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:17:13.271 13:30:54 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:13.271 13:30:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.271 13:30:54 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:17:13.271 13:30:54 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:17:13.839 13:30:55 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:17:13.839 13:30:55 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:17:13.839 13:30:55 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:17:13.839 13:30:55 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:13.839 13:30:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.839 13:30:55 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:17:13.839 13:30:55 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:13.839 13:30:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.839 13:30:55 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:17:13.839 ' 00:17:14.807 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:17:14.807 13:30:56 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:17:14.807 13:30:56 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:14.807 13:30:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:15.066 13:30:56 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:17:15.066 13:30:56 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:15.066 13:30:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:15.066 13:30:56 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:17:15.066 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:17:15.066 ' 00:17:16.444 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:17:16.444 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:17:16.444 13:30:57 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:17:16.444 13:30:57 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:16.444 13:30:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:16.444 13:30:58 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 99777 00:17:16.444 13:30:58 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 99777 ']' 00:17:16.444 13:30:58 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 99777 00:17:16.444 13:30:58 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:17:16.444 13:30:58 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.444 13:30:58 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99777 00:17:16.444 13:30:58 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.444 13:30:58 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.444 13:30:58 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99777' 00:17:16.444 killing process with pid 99777 00:17:16.444 13:30:58 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 99777 00:17:16.444 13:30:58 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 99777 00:17:17.009 13:30:58 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:17:17.010 13:30:58 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 99777 ']' 00:17:17.010 13:30:58 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 99777 00:17:17.010 Process with pid 99777 is not found 00:17:17.010 13:30:58 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 99777 ']' 00:17:17.010 13:30:58 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 99777 00:17:17.010 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (99777) - No such process 00:17:17.010 13:30:58 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 99777 is not found' 00:17:17.010 13:30:58 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:17:17.010 13:30:58 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:17:17.010 13:30:58 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:17:17.010 13:30:58 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:17:17.010 00:17:17.010 real 0m7.871s 00:17:17.010 user 0m16.814s 00:17:17.010 sys 
0m1.088s 00:17:17.010 13:30:58 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.010 13:30:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:17.010 ************************************ 00:17:17.010 END TEST spdkcli_raid 00:17:17.010 ************************************ 00:17:17.010 13:30:58 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:17.010 13:30:58 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:17.010 13:30:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.010 13:30:58 -- common/autotest_common.sh@10 -- # set +x 00:17:17.010 ************************************ 00:17:17.010 START TEST blockdev_raid5f 00:17:17.010 ************************************ 00:17:17.010 13:30:58 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:17.010 * Looking for test storage... 00:17:17.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:17.010 13:30:58 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:17.010 13:30:58 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:17:17.010 13:30:58 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:17.268 13:30:58 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:17.268 13:30:58 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.268 13:30:58 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.269 13:30:58 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:17:17.269 13:30:58 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.269 13:30:58 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:17.269 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.269 --rc genhtml_branch_coverage=1 00:17:17.269 --rc genhtml_function_coverage=1 00:17:17.269 --rc genhtml_legend=1 00:17:17.269 --rc geninfo_all_blocks=1 00:17:17.269 --rc geninfo_unexecuted_blocks=1 00:17:17.269 00:17:17.269 ' 00:17:17.269 13:30:58 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:17.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.269 --rc genhtml_branch_coverage=1 00:17:17.269 --rc genhtml_function_coverage=1 00:17:17.269 --rc genhtml_legend=1 00:17:17.269 --rc geninfo_all_blocks=1 00:17:17.269 --rc geninfo_unexecuted_blocks=1 00:17:17.269 00:17:17.269 ' 00:17:17.269 13:30:58 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:17.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.269 --rc genhtml_branch_coverage=1 00:17:17.269 --rc genhtml_function_coverage=1 00:17:17.269 --rc genhtml_legend=1 00:17:17.269 --rc geninfo_all_blocks=1 00:17:17.269 --rc geninfo_unexecuted_blocks=1 00:17:17.269 00:17:17.269 ' 00:17:17.269 13:30:58 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:17.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.269 --rc genhtml_branch_coverage=1 00:17:17.269 --rc genhtml_function_coverage=1 00:17:17.269 --rc genhtml_legend=1 00:17:17.269 --rc geninfo_all_blocks=1 00:17:17.269 --rc geninfo_unexecuted_blocks=1 00:17:17.269 00:17:17.269 ' 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100030 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:17.269 13:30:58 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 100030 00:17:17.269 13:30:58 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 100030 ']' 00:17:17.269 13:30:58 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.269 13:30:58 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.269 13:30:58 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.269 13:30:58 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.269 13:30:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:17.269 [2024-11-20 13:30:58.854366] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:17:17.269 [2024-11-20 13:30:58.854597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100030 ] 00:17:17.527 [2024-11-20 13:30:59.009889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.527 [2024-11-20 13:30:59.038578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.094 13:30:59 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.094 13:30:59 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:17:18.094 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:18.094 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:17:18.094 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:17:18.094 13:30:59 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.094 13:30:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:18.094 Malloc0 00:17:18.094 Malloc1 00:17:18.094 Malloc2 00:17:18.094 13:30:59 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.094 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:18.094 13:30:59 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.094 13:30:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:18.353 13:30:59 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.353 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:17:18.353 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:18.353 13:30:59 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.353 13:30:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:18.353 13:30:59 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.353 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:18.353 13:30:59 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.353 13:30:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:18.353 13:30:59 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.353 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:18.353 13:30:59 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.353 13:30:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:18.353 13:30:59 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.353 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:18.353 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:17:18.353 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:17:18.353 13:30:59 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.353 13:30:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:18.353 13:30:59 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.353 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:18.353 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:18.353 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "7976d0a5-c538-479c-9c17-bac7813f0191"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7976d0a5-c538-479c-9c17-bac7813f0191",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "7976d0a5-c538-479c-9c17-bac7813f0191",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "7969d5ea-ce79-4587-bb7e-9d8682a267b2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"277edb60-a88e-4d9c-9476-ac50a4ba1470",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "f95cf58a-231c-44ae-8286-6a0bd2609edb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:18.353 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:18.353 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:17:18.353 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:18.353 13:30:59 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100030 00:17:18.353 13:30:59 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 100030 ']' 00:17:18.353 13:30:59 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 100030 00:17:18.354 13:30:59 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:17:18.354 13:30:59 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.354 13:30:59 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100030 00:17:18.354 killing process with pid 100030 00:17:18.354 13:30:59 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:18.354 13:30:59 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:18.354 13:30:59 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100030' 00:17:18.354 13:30:59 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 100030 00:17:18.354 13:30:59 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 100030 00:17:18.921 13:31:00 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:18.921 13:31:00 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:18.921 
13:31:00 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:18.921 13:31:00 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.921 13:31:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:18.921 ************************************ 00:17:18.921 START TEST bdev_hello_world 00:17:18.921 ************************************ 00:17:18.921 13:31:00 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:18.921 [2024-11-20 13:31:00.490953] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:17:18.921 [2024-11-20 13:31:00.491236] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100076 ] 00:17:19.180 [2024-11-20 13:31:00.650051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.180 [2024-11-20 13:31:00.678742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.452 [2024-11-20 13:31:00.857973] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:19.452 [2024-11-20 13:31:00.858134] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:17:19.452 [2024-11-20 13:31:00.858200] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:19.452 [2024-11-20 13:31:00.858550] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:19.452 [2024-11-20 13:31:00.858705] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:19.452 [2024-11-20 13:31:00.858763] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:19.452 [2024-11-20 13:31:00.858823] hello_bdev.c: 65:read_complete: *NOTICE*: Read string 
from bdev : Hello World! 00:17:19.452 00:17:19.452 [2024-11-20 13:31:00.858841] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:19.452 00:17:19.452 real 0m0.680s 00:17:19.452 user 0m0.367s 00:17:19.452 sys 0m0.206s 00:17:19.452 ************************************ 00:17:19.452 END TEST bdev_hello_world 00:17:19.452 ************************************ 00:17:19.452 13:31:01 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.452 13:31:01 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:19.711 13:31:01 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:19.711 13:31:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:19.711 13:31:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.711 13:31:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:19.711 ************************************ 00:17:19.711 START TEST bdev_bounds 00:17:19.711 ************************************ 00:17:19.711 13:31:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:17:19.711 13:31:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100106 00:17:19.711 13:31:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:19.711 13:31:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:19.711 13:31:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100106' 00:17:19.711 Process bdevio pid: 100106 00:17:19.711 13:31:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100106 00:17:19.712 13:31:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 100106 ']' 00:17:19.712 
13:31:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.712 13:31:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.712 13:31:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.712 13:31:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.712 13:31:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:19.712 [2024-11-20 13:31:01.235573] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:17:19.712 [2024-11-20 13:31:01.235716] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100106 ] 00:17:19.972 [2024-11-20 13:31:01.394190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:19.972 [2024-11-20 13:31:01.425190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.972 [2024-11-20 13:31:01.425281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.972 [2024-11-20 13:31:01.425407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.540 13:31:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.540 13:31:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:17:20.540 13:31:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:20.540 I/O targets: 00:17:20.540 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:17:20.540 
00:17:20.540 00:17:20.540 CUnit - A unit testing framework for C - Version 2.1-3 00:17:20.540 http://cunit.sourceforge.net/ 00:17:20.540 00:17:20.540 00:17:20.540 Suite: bdevio tests on: raid5f 00:17:20.540 Test: blockdev write read block ...passed 00:17:20.540 Test: blockdev write zeroes read block ...passed 00:17:20.800 Test: blockdev write zeroes read no split ...passed 00:17:20.800 Test: blockdev write zeroes read split ...passed 00:17:20.800 Test: blockdev write zeroes read split partial ...passed 00:17:20.800 Test: blockdev reset ...passed 00:17:20.800 Test: blockdev write read 8 blocks ...passed 00:17:20.800 Test: blockdev write read size > 128k ...passed 00:17:20.800 Test: blockdev write read invalid size ...passed 00:17:20.800 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:20.800 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:20.800 Test: blockdev write read max offset ...passed 00:17:20.800 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:20.800 Test: blockdev writev readv 8 blocks ...passed 00:17:20.800 Test: blockdev writev readv 30 x 1block ...passed 00:17:20.800 Test: blockdev writev readv block ...passed 00:17:20.800 Test: blockdev writev readv size > 128k ...passed 00:17:20.800 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:20.800 Test: blockdev comparev and writev ...passed 00:17:20.800 Test: blockdev nvme passthru rw ...passed 00:17:20.800 Test: blockdev nvme passthru vendor specific ...passed 00:17:20.800 Test: blockdev nvme admin passthru ...passed 00:17:20.800 Test: blockdev copy ...passed 00:17:20.800 00:17:20.800 Run Summary: Type Total Ran Passed Failed Inactive 00:17:20.800 suites 1 1 n/a 0 0 00:17:20.800 tests 23 23 23 0 0 00:17:20.800 asserts 130 130 130 0 n/a 00:17:20.800 00:17:20.800 Elapsed time = 0.381 seconds 00:17:20.800 0 00:17:20.800 13:31:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 100106 
00:17:20.800 13:31:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 100106 ']' 00:17:20.800 13:31:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 100106 00:17:20.800 13:31:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:17:20.800 13:31:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.800 13:31:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100106 00:17:20.800 13:31:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:20.800 13:31:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:20.800 13:31:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100106' 00:17:20.800 killing process with pid 100106 00:17:20.800 13:31:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 100106 00:17:20.800 13:31:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 100106 00:17:21.059 13:31:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:21.059 00:17:21.059 real 0m1.527s 00:17:21.059 user 0m3.810s 00:17:21.059 sys 0m0.328s 00:17:21.059 13:31:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.059 13:31:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:21.059 ************************************ 00:17:21.059 END TEST bdev_bounds 00:17:21.059 ************************************ 00:17:21.317 13:31:02 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:21.317 13:31:02 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:21.317 13:31:02 blockdev_raid5f -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:17:21.317 13:31:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:21.317 ************************************ 00:17:21.317 START TEST bdev_nbd 00:17:21.317 ************************************ 00:17:21.317 13:31:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:21.317 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:21.317 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:21.317 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:21.317 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:21.317 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:17:21.317 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:21.317 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:17:21.318 13:31:02 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100150 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100150 /var/tmp/spdk-nbd.sock 00:17:21.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 100150 ']' 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.318 13:31:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:21.318 [2024-11-20 13:31:02.837675] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:17:21.318 [2024-11-20 13:31:02.837808] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.318 [2024-11-20 13:31:02.974886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.577 [2024-11-20 13:31:03.003309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.144 13:31:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.144 13:31:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:17:22.145 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:17:22.145 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.145 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:17:22.145 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:22.145 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:17:22.145 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.145 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:17:22.145 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:22.145 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:22.145 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:22.145 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:22.145 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:22.145 13:31:03 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:22.406 1+0 records in 00:17:22.406 1+0 records out 00:17:22.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285606 s, 14.3 MB/s 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:22.406 13:31:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:22.665 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:22.665 { 00:17:22.665 "nbd_device": "/dev/nbd0", 00:17:22.665 "bdev_name": "raid5f" 00:17:22.665 } 00:17:22.665 ]' 00:17:22.665 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:22.665 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:22.666 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:22.666 { 00:17:22.666 "nbd_device": "/dev/nbd0", 00:17:22.666 "bdev_name": "raid5f" 00:17:22.666 } 00:17:22.666 ]' 00:17:22.666 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:22.666 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.666 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:22.666 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:22.666 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:22.666 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:22.666 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:22.926 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:17:22.926 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:22.926 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:22.926 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:22.926 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:22.926 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:22.926 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:22.926 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:22.926 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:22.926 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.926 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.185 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:17:23.445 /dev/nbd0 00:17:23.445 13:31:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:23.445 13:31:04 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:23.445 13:31:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:23.445 13:31:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:23.445 13:31:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.445 13:31:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.445 13:31:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:23.445 13:31:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:23.445 13:31:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.445 13:31:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.445 13:31:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.445 1+0 records in 00:17:23.445 1+0 records out 00:17:23.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410751 s, 10.0 MB/s 00:17:23.445 13:31:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.445 13:31:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:23.445 13:31:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.445 13:31:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.445 13:31:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:23.445 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.445 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.445 13:31:05 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:23.445 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:23.445 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:23.705 { 00:17:23.705 "nbd_device": "/dev/nbd0", 00:17:23.705 "bdev_name": "raid5f" 00:17:23.705 } 00:17:23.705 ]' 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:23.705 { 00:17:23.705 "nbd_device": "/dev/nbd0", 00:17:23.705 "bdev_name": "raid5f" 00:17:23.705 } 00:17:23.705 ]' 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:23.705 256+0 records in 00:17:23.705 256+0 records out 00:17:23.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146719 s, 71.5 MB/s 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:23.705 256+0 records in 00:17:23.705 256+0 records out 00:17:23.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0340739 s, 30.8 MB/s 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.705 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:23.965 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:23.965 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:23.965 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:23.965 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.965 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.965 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:23.965 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:23.965 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.965 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:23.965 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:23.965 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:17:24.225 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:24.225 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:24.225 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:24.225 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:24.225 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:24.225 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:24.225 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:24.225 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:24.225 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:24.225 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:24.225 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:24.225 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:24.225 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:24.225 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:24.225 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:24.225 13:31:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:24.484 malloc_lvol_verify 00:17:24.484 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:24.744 69b3a0f8-ad94-4540-b116-679b845badea 00:17:24.744 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:25.003 10247fe9-509e-4092-af75-432c1253ba4a 00:17:25.003 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:25.264 /dev/nbd0 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:25.264 mke2fs 1.47.0 (5-Feb-2023) 00:17:25.264 Discarding device blocks: 0/4096 done 00:17:25.264 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:25.264 00:17:25.264 Allocating group tables: 0/1 done 00:17:25.264 Writing inode tables: 0/1 done 00:17:25.264 Creating journal (1024 blocks): done 00:17:25.264 Writing superblocks and filesystem accounting information: 0/1 done 00:17:25.264 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.264 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:25.524 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:25.524 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.524 13:31:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100150 00:17:25.524 13:31:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 100150 ']' 00:17:25.524 13:31:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 100150 00:17:25.524 13:31:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:17:25.524 13:31:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.524 13:31:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100150 00:17:25.524 killing process with pid 100150 00:17:25.524 13:31:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.524 13:31:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.524 13:31:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100150' 00:17:25.524 13:31:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 100150 00:17:25.524 13:31:06 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 100150 00:17:25.784 ************************************ 00:17:25.784 END TEST bdev_nbd 00:17:25.784 ************************************ 00:17:25.784 13:31:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:25.784 00:17:25.784 real 0m4.478s 00:17:25.784 user 0m6.675s 00:17:25.784 sys 0m1.182s 00:17:25.784 13:31:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.784 13:31:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:25.784 13:31:07 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:25.784 13:31:07 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:17:25.784 13:31:07 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:17:25.784 13:31:07 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:25.784 13:31:07 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:25.784 13:31:07 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.784 13:31:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:25.784 ************************************ 00:17:25.784 START TEST bdev_fio 00:17:25.784 ************************************ 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:25.784 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:25.784 ************************************ 00:17:25.784 START TEST bdev_fio_rw_verify 00:17:25.784 ************************************ 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:17:25.784 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:25.785 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:25.785 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:25.785 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:25.785 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:17:26.043 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:26.043 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:26.043 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:17:26.043 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:26.043 13:31:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:26.043 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:26.043 fio-3.35 00:17:26.043 Starting 1 thread 00:17:38.346 00:17:38.346 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100338: Wed Nov 20 13:31:18 2024 00:17:38.346 read: IOPS=10.2k, BW=39.7MiB/s (41.6MB/s)(397MiB/10001msec) 00:17:38.346 slat (nsec): min=17984, max=69527, avg=23534.65, stdev=3531.18 00:17:38.346 clat (usec): min=10, max=406, avg=158.26, stdev=58.27 00:17:38.346 lat (usec): min=30, max=443, avg=181.79, stdev=59.12 00:17:38.346 clat percentiles (usec): 00:17:38.346 | 50.000th=[ 155], 99.000th=[ 281], 99.900th=[ 326], 99.990th=[ 383], 00:17:38.346 | 99.999th=[ 404] 00:17:38.346 write: IOPS=10.7k, BW=41.8MiB/s (43.9MB/s)(413MiB/9862msec); 0 zone resets 00:17:38.346 slat (usec): min=8, max=251, avg=20.12, stdev= 4.96 00:17:38.346 clat (usec): min=69, max=1878, avg=356.25, stdev=60.10 00:17:38.346 lat (usec): min=89, max=2129, avg=376.37, stdev=62.03 00:17:38.346 clat percentiles (usec): 00:17:38.346 | 50.000th=[ 355], 99.000th=[ 490], 99.900th=[ 652], 99.990th=[ 1254], 00:17:38.346 | 99.999th=[ 1795] 00:17:38.346 bw ( KiB/s): min=39448, max=45656, per=98.64%, avg=42250.53, stdev=1431.49, samples=19 00:17:38.346 iops : min= 9862, max=11414, avg=10562.63, stdev=357.87, samples=19 00:17:38.346 lat (usec) : 20=0.01%, 50=0.01%, 
100=10.25%, 250=37.29%, 500=52.08% 00:17:38.346 lat (usec) : 750=0.34%, 1000=0.02% 00:17:38.346 lat (msec) : 2=0.01% 00:17:38.346 cpu : usr=98.90%, sys=0.43%, ctx=27, majf=0, minf=11673 00:17:38.346 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:38.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.346 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.346 issued rwts: total=101647,105608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:38.346 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:38.346 00:17:38.346 Run status group 0 (all jobs): 00:17:38.346 READ: bw=39.7MiB/s (41.6MB/s), 39.7MiB/s-39.7MiB/s (41.6MB/s-41.6MB/s), io=397MiB (416MB), run=10001-10001msec 00:17:38.346 WRITE: bw=41.8MiB/s (43.9MB/s), 41.8MiB/s-41.8MiB/s (43.9MB/s-43.9MB/s), io=413MiB (433MB), run=9862-9862msec 00:17:38.346 ----------------------------------------------------- 00:17:38.346 Suppressions used: 00:17:38.346 count bytes template 00:17:38.346 1 7 /usr/src/fio/parse.c 00:17:38.346 781 74976 /usr/src/fio/iolog.c 00:17:38.346 1 8 libtcmalloc_minimal.so 00:17:38.346 1 904 libcrypto.so 00:17:38.346 ----------------------------------------------------- 00:17:38.346 00:17:38.346 00:17:38.346 real 0m11.203s 00:17:38.346 user 0m11.422s 00:17:38.346 sys 0m0.594s 00:17:38.346 13:31:18 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:38.346 ************************************ 00:17:38.346 END TEST bdev_fio_rw_verify 00:17:38.346 ************************************ 00:17:38.346 13:31:18 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:38.346 13:31:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "7976d0a5-c538-479c-9c17-bac7813f0191"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7976d0a5-c538-479c-9c17-bac7813f0191",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "7976d0a5-c538-479c-9c17-bac7813f0191",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "7969d5ea-ce79-4587-bb7e-9d8682a267b2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "277edb60-a88e-4d9c-9476-ac50a4ba1470",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "f95cf58a-231c-44ae-8286-6a0bd2609edb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:38.347 /home/vagrant/spdk_repo/spdk 00:17:38.347 ************************************ 00:17:38.347 END TEST bdev_fio 00:17:38.347 ************************************ 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:17:38.347 00:17:38.347 real 0m11.484s 00:17:38.347 user 0m11.550s 00:17:38.347 sys 0m0.726s 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:38.347 13:31:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:38.347 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:38.347 13:31:18 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:38.347 13:31:18 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:38.347 13:31:18 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:38.347 13:31:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:38.347 ************************************ 00:17:38.347 START TEST bdev_verify 00:17:38.347 ************************************ 00:17:38.347 13:31:18 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:38.347 [2024-11-20 13:31:18.914092] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:17:38.347 [2024-11-20 13:31:18.914231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100491 ] 00:17:38.347 [2024-11-20 13:31:19.073109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:38.347 [2024-11-20 13:31:19.102814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.347 [2024-11-20 13:31:19.102941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.347 Running I/O for 5 seconds... 00:17:39.854 14460.00 IOPS, 56.48 MiB/s [2024-11-20T13:31:22.460Z] 14680.00 IOPS, 57.34 MiB/s [2024-11-20T13:31:23.395Z] 14726.33 IOPS, 57.52 MiB/s [2024-11-20T13:31:24.328Z] 14378.25 IOPS, 56.17 MiB/s [2024-11-20T13:31:24.328Z] 13898.20 IOPS, 54.29 MiB/s 00:17:42.660 Latency(us) 00:17:42.660 [2024-11-20T13:31:24.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.660 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:42.660 Verification LBA range: start 0x0 length 0x2000 00:17:42.660 raid5f : 5.01 6971.43 27.23 0.00 0.00 27545.41 271.87 25298.61 00:17:42.660 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:42.660 Verification LBA range: start 0x2000 length 0x2000 00:17:42.660 raid5f : 5.01 6945.11 27.13 0.00 0.00 27600.01 222.69 25298.61 00:17:42.660 [2024-11-20T13:31:24.328Z] =================================================================================================================== 00:17:42.660 [2024-11-20T13:31:24.328Z] Total : 13916.54 54.36 0.00 0.00 27572.67 222.69 25298.61 00:17:42.918 00:17:42.918 real 0m5.705s 00:17:42.918 user 0m10.647s 00:17:42.918 sys 0m0.222s 00:17:42.918 13:31:24 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:42.918 
************************************ 00:17:42.918 END TEST bdev_verify 00:17:42.918 ************************************ 00:17:42.918 13:31:24 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:43.177 13:31:24 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:43.177 13:31:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:43.177 13:31:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.177 13:31:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:43.177 ************************************ 00:17:43.177 START TEST bdev_verify_big_io 00:17:43.177 ************************************ 00:17:43.177 13:31:24 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:43.177 [2024-11-20 13:31:24.669693] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:17:43.177 [2024-11-20 13:31:24.669902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100567 ] 00:17:43.177 [2024-11-20 13:31:24.826738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:43.434 [2024-11-20 13:31:24.857127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.434 [2024-11-20 13:31:24.857220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.434 Running I/O for 5 seconds... 
00:17:45.749 568.00 IOPS, 35.50 MiB/s [2024-11-20T13:31:28.354Z] 634.00 IOPS, 39.62 MiB/s [2024-11-20T13:31:29.318Z] 676.67 IOPS, 42.29 MiB/s [2024-11-20T13:31:30.256Z] 729.00 IOPS, 45.56 MiB/s [2024-11-20T13:31:30.515Z] 710.80 IOPS, 44.42 MiB/s 00:17:48.847 Latency(us) 00:17:48.847 [2024-11-20T13:31:30.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.847 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:48.847 Verification LBA range: start 0x0 length 0x200 00:17:48.847 raid5f : 5.33 357.73 22.36 0.00 0.00 8705976.09 232.52 454230.30 00:17:48.847 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:48.847 Verification LBA range: start 0x200 length 0x200 00:17:48.847 raid5f : 5.18 367.62 22.98 0.00 0.00 8556966.54 327.32 399283.09 00:17:48.847 [2024-11-20T13:31:30.515Z] =================================================================================================================== 00:17:48.847 [2024-11-20T13:31:30.515Z] Total : 725.36 45.33 0.00 0.00 8631471.31 232.52 454230.30 00:17:49.106 00:17:49.106 real 0m6.012s 00:17:49.106 user 0m11.270s 00:17:49.106 sys 0m0.216s 00:17:49.106 ************************************ 00:17:49.106 END TEST bdev_verify_big_io 00:17:49.106 ************************************ 00:17:49.106 13:31:30 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.106 13:31:30 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.106 13:31:30 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:49.106 13:31:30 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:49.106 13:31:30 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.106 13:31:30 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:49.106 ************************************ 00:17:49.106 START TEST bdev_write_zeroes 00:17:49.106 ************************************ 00:17:49.106 13:31:30 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:49.106 [2024-11-20 13:31:30.756091] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:17:49.106 [2024-11-20 13:31:30.756296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100653 ] 00:17:49.365 [2024-11-20 13:31:30.915095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.365 [2024-11-20 13:31:30.946146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.624 Running I/O for 1 seconds... 
00:17:50.561 18975.00 IOPS, 74.12 MiB/s 00:17:50.561 Latency(us) 00:17:50.561 [2024-11-20T13:31:32.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.561 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:50.561 raid5f : 1.01 18976.57 74.13 0.00 0.00 6721.07 1831.57 20261.79 00:17:50.561 [2024-11-20T13:31:32.229Z] =================================================================================================================== 00:17:50.561 [2024-11-20T13:31:32.229Z] Total : 18976.57 74.13 0.00 0.00 6721.07 1831.57 20261.79 00:17:51.127 ************************************ 00:17:51.127 END TEST bdev_write_zeroes 00:17:51.127 ************************************ 00:17:51.127 00:17:51.127 real 0m1.850s 00:17:51.127 user 0m1.521s 00:17:51.127 sys 0m0.210s 00:17:51.127 13:31:32 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:51.127 13:31:32 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:51.127 13:31:32 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:51.127 13:31:32 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:51.128 13:31:32 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:51.128 13:31:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:51.128 ************************************ 00:17:51.128 START TEST bdev_json_nonenclosed 00:17:51.128 ************************************ 00:17:51.128 13:31:32 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:51.128 [2024-11-20 
13:31:32.670105] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:17:51.128 [2024-11-20 13:31:32.670343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100691 ] 00:17:51.387 [2024-11-20 13:31:32.830156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.387 [2024-11-20 13:31:32.875388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.388 [2024-11-20 13:31:32.875522] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:51.388 [2024-11-20 13:31:32.875560] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:51.388 [2024-11-20 13:31:32.875575] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:51.388 00:17:51.388 real 0m0.416s 00:17:51.388 user 0m0.173s 00:17:51.388 sys 0m0.139s 00:17:51.388 ************************************ 00:17:51.388 END TEST bdev_json_nonenclosed 00:17:51.388 ************************************ 00:17:51.388 13:31:32 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:51.388 13:31:32 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:51.388 13:31:33 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:51.388 13:31:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:51.388 13:31:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:51.388 13:31:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:51.388 
************************************ 00:17:51.388 START TEST bdev_json_nonarray 00:17:51.388 ************************************ 00:17:51.388 13:31:33 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:51.648 [2024-11-20 13:31:33.139839] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:17:51.648 [2024-11-20 13:31:33.140089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100718 ] 00:17:51.648 [2024-11-20 13:31:33.297125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.908 [2024-11-20 13:31:33.343908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.908 [2024-11-20 13:31:33.344179] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:51.908 [2024-11-20 13:31:33.344257] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:51.908 [2024-11-20 13:31:33.344310] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:51.908 ************************************ 00:17:51.908 END TEST bdev_json_nonarray 00:17:51.908 ************************************ 00:17:51.908 00:17:51.908 real 0m0.413s 00:17:51.908 user 0m0.180s 00:17:51.908 sys 0m0.128s 00:17:51.908 13:31:33 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:51.908 13:31:33 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:51.908 13:31:33 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:17:51.908 13:31:33 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:17:51.908 13:31:33 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:17:51.908 13:31:33 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:51.908 13:31:33 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:17:51.908 13:31:33 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:51.908 13:31:33 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:51.908 13:31:33 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:17:51.908 13:31:33 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:17:51.908 13:31:33 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:17:51.908 13:31:33 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:17:51.908 00:17:51.908 real 0m35.013s 00:17:51.908 user 0m48.218s 00:17:51.908 sys 0m4.357s 00:17:51.908 ************************************ 00:17:51.908 END TEST blockdev_raid5f 00:17:51.908 ************************************ 00:17:51.908 13:31:33 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:17:51.908 13:31:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:51.908 13:31:33 -- spdk/autotest.sh@194 -- # uname -s 00:17:51.908 13:31:33 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:51.908 13:31:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:51.908 13:31:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:51.908 13:31:33 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:17:51.908 13:31:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:51.908 13:31:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:51.908 13:31:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:52.168 13:31:33 -- common/autotest_common.sh@10 -- # set +x 00:17:52.168 13:31:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:52.168 13:31:33 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:52.168 13:31:33 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:52.168 13:31:33 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:52.168 13:31:33 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:52.168 13:31:33 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:52.168 13:31:33 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:52.168 13:31:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:52.168 13:31:33 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:52.168 13:31:33 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:17:52.168 13:31:33 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:17:52.168 13:31:33 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:17:52.168 13:31:33 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:17:52.168 13:31:33 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:17:52.168 13:31:33 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:17:52.168 13:31:33 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:17:52.168 13:31:33 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:17:52.168 13:31:33 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:17:52.168 13:31:33 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:17:52.168 13:31:33 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:17:52.168 13:31:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:52.168 13:31:33 -- common/autotest_common.sh@10 -- # set +x 00:17:52.168 13:31:33 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:17:52.168 13:31:33 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:17:52.168 13:31:33 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:17:52.168 13:31:33 -- common/autotest_common.sh@10 -- # set +x 00:17:54.129 INFO: APP EXITING 00:17:54.129 INFO: killing all VMs 00:17:54.129 INFO: killing vhost app 00:17:54.129 INFO: EXIT DONE 00:17:54.389 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:54.389 Waiting for block devices as requested 00:17:54.389 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:54.647 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:55.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:55.585 Cleaning 00:17:55.585 Removing: /var/run/dpdk/spdk0/config 00:17:55.585 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:17:55.585 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:17:55.585 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:17:55.585 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:17:55.585 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:17:55.585 Removing: /var/run/dpdk/spdk0/hugepage_info 00:17:55.585 Removing: /dev/shm/spdk_tgt_trace.pid68849 00:17:55.585 Removing: /var/run/dpdk/spdk0 00:17:55.585 Removing: /var/run/dpdk/spdk_pid100030 00:17:55.585 Removing: /var/run/dpdk/spdk_pid100076 00:17:55.585 Removing: /var/run/dpdk/spdk_pid100106 00:17:55.585 Removing: /var/run/dpdk/spdk_pid100329 00:17:55.585 Removing: /var/run/dpdk/spdk_pid100491 00:17:55.585 Removing: /var/run/dpdk/spdk_pid100567 00:17:55.585 Removing: 
/var/run/dpdk/spdk_pid100653
00:17:55.586 Removing: /var/run/dpdk/spdk_pid100691
00:17:55.586 Removing: /var/run/dpdk/spdk_pid100718
00:17:55.586 Removing: /var/run/dpdk/spdk_pid68685
00:17:55.586 Removing: /var/run/dpdk/spdk_pid68849
00:17:55.586 Removing: /var/run/dpdk/spdk_pid69050
00:17:55.586 Removing: /var/run/dpdk/spdk_pid69138
00:17:55.586 Removing: /var/run/dpdk/spdk_pid69161
00:17:55.586 Removing: /var/run/dpdk/spdk_pid69278
00:17:55.586 Removing: /var/run/dpdk/spdk_pid69296
00:17:55.586 Removing: /var/run/dpdk/spdk_pid69473
00:17:55.586 Removing: /var/run/dpdk/spdk_pid69552
00:17:55.586 Removing: /var/run/dpdk/spdk_pid69637
00:17:55.586 Removing: /var/run/dpdk/spdk_pid69737
00:17:55.586 Removing: /var/run/dpdk/spdk_pid69812
00:17:55.586 Removing: /var/run/dpdk/spdk_pid69856
00:17:55.586 Removing: /var/run/dpdk/spdk_pid69888
00:17:55.586 Removing: /var/run/dpdk/spdk_pid69953
00:17:55.586 Removing: /var/run/dpdk/spdk_pid70070
00:17:55.586 Removing: /var/run/dpdk/spdk_pid70495
00:17:55.586 Removing: /var/run/dpdk/spdk_pid70543
00:17:55.586 Removing: /var/run/dpdk/spdk_pid70595
00:17:55.586 Removing: /var/run/dpdk/spdk_pid70598
00:17:55.586 Removing: /var/run/dpdk/spdk_pid70669
00:17:55.586 Removing: /var/run/dpdk/spdk_pid70679
00:17:55.586 Removing: /var/run/dpdk/spdk_pid70743
00:17:55.586 Removing: /var/run/dpdk/spdk_pid70759
00:17:55.586 Removing: /var/run/dpdk/spdk_pid70801
00:17:55.586 Removing: /var/run/dpdk/spdk_pid70819
00:17:55.586 Removing: /var/run/dpdk/spdk_pid70861
00:17:55.586 Removing: /var/run/dpdk/spdk_pid70879
00:17:55.586 Removing: /var/run/dpdk/spdk_pid71017
00:17:55.586 Removing: /var/run/dpdk/spdk_pid71048
00:17:55.586 Removing: /var/run/dpdk/spdk_pid71137
00:17:55.586 Removing: /var/run/dpdk/spdk_pid72296
00:17:55.586 Removing: /var/run/dpdk/spdk_pid72491
00:17:55.586 Removing: /var/run/dpdk/spdk_pid72620
00:17:55.586 Removing: /var/run/dpdk/spdk_pid73230
00:17:55.586 Removing: /var/run/dpdk/spdk_pid73425
00:17:55.586 Removing: /var/run/dpdk/spdk_pid73554
00:17:55.586 Removing: /var/run/dpdk/spdk_pid74159
00:17:55.586 Removing: /var/run/dpdk/spdk_pid74478
00:17:55.586 Removing: /var/run/dpdk/spdk_pid74607
00:17:55.586 Removing: /var/run/dpdk/spdk_pid75948
00:17:55.586 Removing: /var/run/dpdk/spdk_pid76190
00:17:55.845 Removing: /var/run/dpdk/spdk_pid76319
00:17:55.845 Removing: /var/run/dpdk/spdk_pid77649
00:17:55.845 Removing: /var/run/dpdk/spdk_pid77891
00:17:55.845 Removing: /var/run/dpdk/spdk_pid78020
00:17:55.845 Removing: /var/run/dpdk/spdk_pid79361
00:17:55.845 Removing: /var/run/dpdk/spdk_pid79795
00:17:55.845 Removing: /var/run/dpdk/spdk_pid79930
00:17:55.845 Removing: /var/run/dpdk/spdk_pid81355
00:17:55.845 Removing: /var/run/dpdk/spdk_pid81603
00:17:55.845 Removing: /var/run/dpdk/spdk_pid81733
00:17:55.845 Removing: /var/run/dpdk/spdk_pid83163
00:17:55.845 Removing: /var/run/dpdk/spdk_pid83411
00:17:55.845 Removing: /var/run/dpdk/spdk_pid83540
00:17:55.845 Removing: /var/run/dpdk/spdk_pid84970
00:17:55.845 Removing: /var/run/dpdk/spdk_pid85441
00:17:55.845 Removing: /var/run/dpdk/spdk_pid85570
00:17:55.845 Removing: /var/run/dpdk/spdk_pid85701
00:17:55.845 Removing: /var/run/dpdk/spdk_pid86108
00:17:55.845 Removing: /var/run/dpdk/spdk_pid86839
00:17:55.845 Removing: /var/run/dpdk/spdk_pid87198
00:17:55.845 Removing: /var/run/dpdk/spdk_pid87880
00:17:55.845 Removing: /var/run/dpdk/spdk_pid88327
00:17:55.845 Removing: /var/run/dpdk/spdk_pid89085
00:17:55.845 Removing: /var/run/dpdk/spdk_pid89483
00:17:55.845 Removing: /var/run/dpdk/spdk_pid91404
00:17:55.845 Removing: /var/run/dpdk/spdk_pid91842
00:17:55.845 Removing: /var/run/dpdk/spdk_pid92266
00:17:55.845 Removing: /var/run/dpdk/spdk_pid94303
00:17:55.845 Removing: /var/run/dpdk/spdk_pid94783
00:17:55.845 Removing: /var/run/dpdk/spdk_pid95288
00:17:55.845 Removing: /var/run/dpdk/spdk_pid96323
00:17:55.845 Removing: /var/run/dpdk/spdk_pid96635
00:17:55.845 Removing: /var/run/dpdk/spdk_pid97555
00:17:55.845 Removing: /var/run/dpdk/spdk_pid97867
00:17:55.845 Removing: /var/run/dpdk/spdk_pid98790
00:17:55.845 Removing: /var/run/dpdk/spdk_pid99107
00:17:55.845 Removing: /var/run/dpdk/spdk_pid99777
00:17:55.845 Clean
00:17:55.845 13:31:37 -- common/autotest_common.sh@1453 -- # return 0
00:17:55.845 13:31:37 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:17:55.845 13:31:37 -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:55.845 13:31:37 -- common/autotest_common.sh@10 -- # set +x
00:17:56.146 13:31:37 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:17:56.146 13:31:37 -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:56.146 13:31:37 -- common/autotest_common.sh@10 -- # set +x
00:17:56.146 13:31:37 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:17:56.146 13:31:37 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:17:56.146 13:31:37 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:17:56.146 13:31:37 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:17:56.146 13:31:37 -- spdk/autotest.sh@398 -- # hostname
00:17:56.146 13:31:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:17:56.416 geninfo: WARNING: invalid characters removed from testname!
00:18:23.004 13:32:00 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:23.004 13:32:03 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:24.382 13:32:05 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:26.972 13:32:08 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:28.878 13:32:10 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:31.413 13:32:12 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:33.319 13:32:14 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:18:33.319 13:32:14 -- spdk/autorun.sh@1 -- $ timing_finish
00:18:33.319 13:32:14 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:18:33.319 13:32:14 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:33.319 13:32:14 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:33.319 13:32:14 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:18:33.319 + [[ -n 6147 ]]
00:18:33.319 + sudo kill 6147
00:18:33.329 [Pipeline] }
00:18:33.348 [Pipeline] // timeout
00:18:33.354 [Pipeline] }
00:18:33.371 [Pipeline] // stage
00:18:33.379 [Pipeline] }
00:18:33.397 [Pipeline] // catchError
00:18:33.409 [Pipeline] stage
00:18:33.412 [Pipeline] { (Stop VM)
00:18:33.427 [Pipeline] sh
00:18:33.748 + vagrant halt
00:18:37.037 ==> default: Halting domain...
00:18:43.633 [Pipeline] sh
00:18:43.917 + vagrant destroy -f
00:18:47.224 ==> default: Removing domain...
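The coverage post-processing traced above (autotest.sh@399 through @407) is the usual lcov merge-then-filter pattern: combine the pre-test baseline with the post-test capture into one tracefile, then strip third-party and helper paths from it one pattern per pass, rewriting the tracefile in place. A dry-run sketch of that sequence, using placeholder paths rather than the job's real output directory and only printing the composed commands:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the lcov steps in the log above: it composes the
# command lines but does not execute lcov ("output" is a placeholder
# tracefile directory, and only two of the job's --rc flags are shown).
set -u

out=output
rc="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

# Step 1: merge the pre-test baseline with the post-test capture.
merge_cmd="lcov $rc -q -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info"

# Step 2: remove third-party and helper paths from the merged tracefile,
# one pattern per pass, rewriting it in place (as the log does).
filter_cmds=()
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    filter_cmds+=("lcov $rc -q -r $out/cov_total.info '$pattern' -o $out/cov_total.info")
done

printf '%s\n' "$merge_cmd" "${filter_cmds[@]}"
```

Filtering after the merge (rather than during capture) keeps the raw tracefiles intact, so the exclusion list can be adjusted without rerunning the tests.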
00:18:47.237 [Pipeline] sh
00:18:47.600 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:18:47.610 [Pipeline] }
00:18:47.627 [Pipeline] // stage
00:18:47.634 [Pipeline] }
00:18:47.649 [Pipeline] // dir
00:18:47.655 [Pipeline] }
00:18:47.670 [Pipeline] // wrap
00:18:47.683 [Pipeline] }
00:18:47.713 [Pipeline] // catchError
00:18:47.750 [Pipeline] stage
00:18:47.752 [Pipeline] { (Epilogue)
00:18:47.762 [Pipeline] sh
00:18:48.043 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:18:54.621 [Pipeline] catchError
00:18:54.624 [Pipeline] {
00:18:54.637 [Pipeline] sh
00:18:54.967 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:18:54.967 Artifacts sizes are good
00:18:54.976 [Pipeline] }
00:18:54.992 [Pipeline] // catchError
00:18:55.006 [Pipeline] archiveArtifacts
00:18:55.015 Archiving artifacts
00:18:55.115 [Pipeline] cleanWs
00:18:55.125 [WS-CLEANUP] Deleting project workspace...
00:18:55.125 [WS-CLEANUP] Deferred wipeout is used...
00:18:55.131 [WS-CLEANUP] done
00:18:55.133 [Pipeline] }
00:18:55.146 [Pipeline] // stage
00:18:55.151 [Pipeline] }
00:18:55.165 [Pipeline] // node
00:18:55.170 [Pipeline] End of Pipeline
00:18:55.222 Finished: SUCCESS